ED · rD and ρ(rQ, rT, rD) = ρQ · rQ + ρT · rT + ρD · rD, where EQ, ET, ED and ρQ, ρT, ρD are the Young's moduli and densities of material composed entirely of the Q, T and D building blocks, respectively. The fitting result () shows that the Young's modulus of the SiOC:H is dominated by the concentrations of the basic building blocks Q and T, whereas the concentration of block D does not significantly influence the Young's modulus but does affect the density.

Since the molecular model of the low-k material reproduces the experimental trend, the interfacial model for low-k/silica is then established. represents the molecular simulation of the interfacial system of amorphous silica and low-k: a illustrates the system with the two materials, b shows the two molecules approaching each other under the applied artificial charge, and the interfacial model, in which covalent bonds are formed between the two molecules, is illustrated in c. The molecule is subjected to a fixed boundary at the bottom, and a prescribed displacement with constant velocity is applied to the upper atoms (the silica molecule). All interface models are simulated with the commercial MD solver Discover (version 2005.2). a demonstrates an MD computation on the interface between crystalline silicon dioxide and the SiOC:H film. Two simulations are performed in order to identify the influence of the covalent bonds. The bottom of the SiOC:H film is fixed and the prescribed motion is applied to the SiO2 film in the direction normal to the interface. The simulation results are shown in b, where the normalized force and distance are defined as the obtained force and gap distance divided by the reaction force and gap distance of the case without covalent bonds. The simulation reveals that the model with chemical interaction exhibits a smaller equilibrium gap distance than the one without chemical interaction. The model with chemical interaction also exhibits a much higher peak interfacial force in the normal direction than the one without, as well as a larger area below the force-distance curve. Theoretically, the existence of covalent bonds reduces the total potential energy of the system, and their formation cannot be achieved without extra energy input. However, because a crack seeks the weakest part of the system and propagates along it, increasing the number of covalent bonds may not improve the adhesive strength of the system if the weakest part lies inside the material.

a shows the reaction force versus applied displacement relationship of the larger silica/low-k model, which is illustrated in b. Note that amorphous silica is used in order to approximate the TEOS in the IC back-end structure. A significant force drop can be identified after the covalent bonds break, as shown by the dashed line in a. The simulation result indicates that the breaking of the covalent bonds, as well as the crack propagation, occurs at the weakest position of the current atomic configuration.

The simulation results indicate that delamination is not always due to breaking of the covalent bonds at the interface. Instead, failure within the low-k material and the amorphous silica has often been observed in this model. The multiple crack propagation paths are illustrated in b. Under the loading/boundary conditions illustrated in b, the crack (i.e., breaking of Si–O bonds) initiates at point A (in the low-k material) and propagates in both directions. The secondary crack initiates at point B, followed by point C in the amorphous silica. The failure order of A, B and C is illustrated in b.
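The linear mixing rules above lend themselves to a simple least-squares estimate of the per-building-block coefficients. The Python sketch below illustrates the idea only; the composition fractions and the "measured" moduli and densities are placeholder values, and the fitted EQ, ET, ED and ρQ, ρT, ρD are therefore not the values reported in the paper.

```python
import numpy as np

# Hypothetical data: each row gives the fractions (rQ, rT, rD) of the Q, T, D
# building blocks for one SiOC:H model, with rQ + rT + rD = 1.
fractions = np.array([
    [0.50, 0.30, 0.20],
    [0.40, 0.40, 0.20],
    [0.30, 0.40, 0.30],
    [0.20, 0.50, 0.30],
])

# Hypothetical "measured" Young's moduli (GPa) and densities (g/cm^3).
E_meas   = np.array([12.0, 10.5, 8.9, 7.8])
rho_meas = np.array([1.45, 1.40, 1.33, 1.28])

# Least-squares fit of E(r) = EQ*rQ + ET*rT + ED*rD (and likewise for rho).
E_coeff,   *_ = np.linalg.lstsq(fractions, E_meas,   rcond=None)
rho_coeff, *_ = np.linalg.lstsq(fractions, rho_meas, rcond=None)

print("fitted EQ, ET, ED (GPa)    :", E_coeff)
print("fitted rhoQ, rhoT, rhoD    :", rho_coeff)

# Prediction for a new composition with the same mixing rule.
r_new = np.array([0.35, 0.45, 0.20])
print("predicted E  :", r_new @ E_coeff)
print("predicted rho:", r_new @ rho_coeff)
```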
Notably, a pore located at point D induces the first and second crack initiation points A and B, rather than the interface between the two molecules. After the interface is totally delaminated, the simulation shows that residue of the low-k material (R–Si–R, R = –CH3 or –OH) can be detected on the silica side of the surface, which has been measured experimentally. The crack paths in b are almost parallel to the interface (pink dashed line) we established, but do not overlap it. This result suggests that the weakest region is not the interface but the low-k material. Moreover, this modeling discovery is consistent with the experimental result.

In this paper, a prediction method for the mechanical/interfacial strengths of an amorphous low-k material (SiOC:H) is presented. The molecular dynamics (MD) method is used because it describes both the atomic structure and the interaction forces between atoms. Before the simulation of the interfacial strength, an engineering approach is applied to model the chemical configuration at the interface. The same simulation procedure used in the fracture strength simulation is applied to obtain the interfacial strength as well as the fracture (delamination) energy. Moreover, a unique modeling technique for the atomic state at the molecular interface is presented, from which the interfacial strength can be obtained. The simulation results also imply that the existence of covalent bonds has a significant influence on the silica/low-k interfacial system. Moreover, delamination is not always due to the covalent bonds at the interface, but often to failure in the softer material (e.g., SiOC:H). The simulation results indicate that the crack initiates around the pore in the low-k material. Therefore, the interfacial strength can be improved by enhancing the local stiffness of the few-nm-thick region of the low-k material near the interface and by preventing pores near the interface. Increasing the number of covalent bonds at the interface contributes less, because the crack propagates along the weakest part of the interfacial system. However, predicting crack initiation and propagation in a nano-scale interfacial system remains an enormous challenge, owing to unknown defects in the material and at the interface, the lack of a sufficient force field, the lack of robust multi-scale modelling techniques and the lack of the exact chemical configuration of the interface.

E. Phase diagram, prediction (including CALPHAD)

Phase analysis of Mg–La–Nd and Mg–La–Ce alloys

The ternary solubilities and solidification details of the Mg–Nd–La and Mg–La–Ce systems were examined through the evaluation of two alloys from each system. Thermodynamic parameters of the two systems were optimized using the observed phase constituents and the ternary solubilities measured by SEM, combined with the DSC signals of the four alloys. Isothermal sections at 500 °C and solidification paths of the two systems were evaluated.

► Ternary solubilities in both Mg–Nd–La and Mg–La–Ce systems were revealed. ► Thermodynamic descriptions of the two systems were developed. ► Isothermal sections at 500 °C and solidification paths of two systems evaluated. ► As-cast microstructures can be well explained aided by thermodynamic calculations.

In recent years there has been considerable interest in the addition of rare earth (RE) metals to magnesium and its alloys to improve the mechanical properties, high temperature creep resistance in particular.
The RE elements are generally added to Mg alloys in the form of mischmetal, a mixture of various RE metals. Commonly used mischmetal consists mainly of the light RE metals, i.e. La (25–34%), Ce (48–55%), Nd (11–17%) and small amounts of Pr and Sm. It has been realized that the individual RE elements may exert distinctly different effects on the mechanical properties of Mg alloys. For example, Nd-rich alloys have substantially improved creep performance over Ce-rich or La-rich alloys. Understanding the phase equilibria and the intermetallic phases that form in Mg–RE systems is very important, as the identity, amount and morphology of the consequent eutectic appear to affect properties such as tensile, impact and corrosion behaviour. Literature data detailing the phase equilibria and thermodynamic parameters of ternary Mg–RE1–RE2 systems are very limited. The only available experimental data in the literature for the Mg–La–Ce system are given by Rokhlin and Bochvar. Efforts have been made by the present authors on the thermodynamic assessment of the common ternary Mg–RE1–RE2 systems, i.e. Mg–Ce–Nd, Mg–La–Ce and Mg–La–Nd, based on the phase analysis of several critical alloys in both the as-cast and heat treated conditions. The phase analysis and thermodynamic assessment of the Mg–Ce–Nd system was reported in a previous paper. A number of selected Mg–La–Nd and Mg–La–Ce alloys have been produced and used for phase identification in the as-cast and heat treated state to provide data on phase equilibria for the thermodynamic model. The alloys were prepared from the appropriate mixtures of 99.9% pure magnesium (Shanxi Yinguang) and 99.5% pure elemental Ce, La and Nd (China Rare Metal Material Co. Ltd.) by induction melting in a mild steel crucible under a protective argon atmosphere and cast into a steel mould. The designation and analyzed chemical compositions of the alloys are shown in . To determine the equilibrium phases present in the alloys, samples of the as-cast alloys were also sealed in silica tubes and annealed at 500 °C for 100 h, followed by water quenching. The phase compositions of the alloys were analyzed using a JEOL JSM 7001F field emission scanning electron microscope (SEM), equipped with a Bruker Quantax energy dispersive X-ray spectroscopy (EDS) system. X-ray diffraction (XRD) analysis was also carried out on the alloys to confirm the phase identification by EDS. For XRD analysis, the samples were measured using a Bruker D8 X-ray diffractometer. Pieces weighing about 250 mg taken from representative areas inside the as-cast alloys A1 to B2 described in were sealed under an argon atmosphere by welding in thin-walled tantalum capsules to avoid evaporation and oxidation. No reactions between the Ta-capsules and the samples were observed. After testing the gas tightness of the tantalum capsules in a separate furnace, the samples were measured by differential scanning calorimetry (DSC) in a heat-flux cylindrical Calvet-type calorimetric system Multi HTC 96 (Setaram, Caluire, France). The equipment was calibrated using pure Cu, Ag and Mg sealed in tantalum capsules. Helium, at a flow rate of 2 l/h, was used as the analysis chamber gas. The reference Ta-capsule was also sealed by welding, and a sapphire cylinder was used as the reference material. The sapphire mass was 492.5 mg, which represented a good balance to the heat capacity of the sample.
The samples were first heated to 700 °C into the molten state at a heating rate of +10 K/min and then cooled down/heated up in the following cycles: −5 K/min, +5 K/min, −1 K/min and +1 K/min. The results were observed to be consistent and reproducible, with the difference between the corresponding heating and cooling peaks being below 4 K. The overall uncertainty of the DSC measurements for temperature determination was estimated as ±3 K. shows the phase identification by SEM-EDS and XRD in the A1 alloy in the as-cast state and after annealing at 500 °C for 100 h. Four phases were identified in the as-cast sample (), including RE2Mg17, REMg12, REMg3 and RE5Mg41. The REMg3 phase appears to reside within the RE5Mg41 phase, as would be expected for a peritectic or peritectoid reaction, where the transformation is limited by diffusion through the reaction product. In the annealed sample ((b)), there appear two distinct phases. Whilst the dark phase could be identified as RE5Mg41 by EDS analysis, the bright phase was found to contain mainly RE and could not be identified consistently as being any of the possible Mg–RE phases. The EDS results were confirmed by the XRD analysis, from which only the RE5Mg41 phase was identified. There have been reports on the presence of RE-rich particles in Mg–RE based alloys, in particular when the alloys were solution treated or annealed at high temperatures. The Mg–RE phases identified for the A2 alloy are shown in . Similar to those obtained for alloy A1, RE2Mg17, REMg12, REMg3 and RE5Mg41 were identified in the as-cast sample (REMg12 was difficult to observe by SEM due to its small volume fraction, but the XRD analysis indicated its presence). For the sample annealed at 500 °C, two phases were identified, one being RE5Mg41 and the other being REMg3. The ternary solubilities of the Mg–La–Ce phases at 500 °C extracted from the SEM/EDS results are given in . shows the phase identification by EDS and XRD in the B1 alloy in the as-cast state and after annealing at 500 °C for 100 h. For the as-cast sample, three Mg–RE phases were identified by the EDS analysis, including RE2Mg17, REMg12 and REMg3. The presence of RE2Mg17 and REMg12 was confirmed by the XRD analysis, but REMg3 was not revealed, probably due to its small volume fraction. It was noted that the RE2Mg17 phase has more La than Nd, whilst the REMg12 phase has more Nd than La. After annealing at 500 °C, the REMg12 phase remained but the RE2Mg17 phase was replaced by RE5Mg41. Interestingly, the REMg12 phase is now rich in La with regard to RE. The RE5Mg41 phase has more Nd than La. The results obtained for the B2 alloy are shown in . Examination of the as-cast sample revealed the presence of four phases including RE2Mg17, REMg12, REMg3 and RE5Mg41. The formation of the RE5Mg41 phase is considered to result from the higher Nd content in this alloy as compared to the B1 alloy. Again, the REMg3 phase is seen to reside within the RE5Mg41 phase, indicating a possible peritectic reaction.
Similar to those in the annealed B1 sample, REMg12 and RE5Mg41 were identified as the equilibrium Mg–RE phases in the B2 alloy after annealing at 500 °C. It appears from these results that La tends to partition to the RE2Mg17 phase whilst Nd tends to partition to the REMg12 phase during the solidification process in the Mg–La–Nd system. The ternary solubilities of the Mg–La–Nd phases extracted from the EDS results are given in . Temperature events obtained from the thermal analysis, as extracted from the DSC signals, are given in Tables , for the Mg–La–Ce and the Mg–La–Nd systems respectively. Each heating/cooling signal is based on two repeated cycles of a single sample. Invariant reactions were identified from the observed peak shape. For heating cycles, the onset temperature was taken for invariant reactions and the peak maximum for all other signals. For cooling cycles, the transition temperatures were always evaluated from the onset temperature. The interpretations of the experimental temperatures in the last two columns of the tables are based on the present thermodynamic equilibrium calculations. The thermodynamic parameters of the constituent binary system Mg–Ce were taken from previous work by the authors. In the binary Mg–La system the thermodynamic parameters for LaMg3, LaMg2, LaMg and the liquid were taken from . The enthalpy/entropy relation of the binary parameters for LaMg12 and La2Mg17 (given in ) was optimized considering the systematic trend of binary RE–Mg phases. The resulting Mg-rich part of the calculated Mg–La binary phase diagram is shown together with the experimental data for the solubility of La in (Mg) of . The binary systems Ce–La and La–Nd were extrapolated as ideal solutions of the liquid and the structurally identical solid phases without any binary interaction parameters. All binary compounds form large ternary solid solutions. This is because similar phases in each of the binary Mg–RE systems share the same crystal structure, with the majority of them showing a series of continuous solid solutions. These phases are modelled with two sublattices and a substitutional solution on the RE-sublattice, (Nd,Ce,La)m(Mg)n. Continuous solid solutions in all ternary systems exist for both the cF16-BiF3-type REMg3 phase and the cP2-CsCl-type REMg phase. In that case, the Gibbs energies of the CeMgm, LaMgm and NdMgm phases correspond to real (stable) phases and are taken directly from the binary description of the Mg–Ce system. The obtained ternary parameters for the Mg–La–Ce and Mg–La–Nd systems are given in . The calculated ternary isothermal sections at 500 °C for the Mg–La–Ce and the Mg–La–Nd systems are given in Figs. and . In the Mg–La–Ce system the phase equilibria in the Mg-rich corner at 500 °C are constrained to (Mg) + REMg12 by the continuous solid solution of REMg12 (see ). Also the phases REMg3 and REMg show continuous solid solutions. RE5Mg41 (stable at the Mg–Ce edge) and RE2Mg17 (stable at the Mg–La edge) show large ternary solubilities, limited by the two-phase equilibrium RE5Mg41 + RE2Mg17. Adjacent to this, two three-phase equilibria exist at 500 °C: RE5Mg41 + RE2Mg17 + REMg12 and RE5Mg41 + RE2Mg17 + REMg3. Three invariant reactions with the liquid phase are calculated in the Mg–La–Ce system:

RE2Mg = L + REMg + REMg3 at 726 °C

L + RE2Mg = REMg + REMg3 at 663 °C

L + REMg3 + RE2Mg17 = RE5Mg41 at 626 °C

The high-temperature C15-Laves-type phase RE2Mg is not stable below 619 °C, corresponding to its lowest eutectoid decomposition temperature, which lies in the binary Mg–Ce system.
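As a toy illustration of the (Nd,Ce,La)m(Mg)n two-sublattice description mentioned above, the following Python sketch evaluates a compound-energy-formalism Gibbs energy with ideal mixing on the RE sublattice only. The end-member energies, the site fractions and the omission of excess interaction parameters are illustrative assumptions, not the parameters optimized in this work.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def gibbs_two_sublattice(y_re, g_end, m=1.0, T=773.15):
    """Molar Gibbs energy of an (RE1,RE2,...)m(Mg)n phase in the compound-energy
    formalism, with ideal mixing on the RE sublattice only (no excess terms).

    y_re  : site fractions on the RE sublattice (must sum to 1)
    g_end : Gibbs energies of the end-members (RE)m(Mg)n, J per mole of formula units
    """
    elements = list(y_re)
    y = np.array([y_re[el] for el in elements])
    g = np.array([g_end[el] for el in elements])
    g_ref = float(y @ g)                                   # weighted end-member term
    y_nz = y[y > 0]
    g_ideal = m * R_GAS * T * float(np.sum(y_nz * np.log(y_nz)))  # configurational entropy
    return g_ref + g_ideal

# Illustrative (made-up) end-member energies for a REMg3-type phase, J/mol:
g_end = {"La": -45_000.0, "Ce": -47_000.0, "Nd": -50_000.0}
y_re  = {"La": 0.2, "Ce": 0.0, "Nd": 0.8}                  # example site fractions

print("G(REMg3 solution) at 500 °C ≈", gibbs_two_sublattice(y_re, g_end), "J/mol")
```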
The ternary peritectic reaction at 626 °C (liquid composition 37.6 wt.% Ce and 0.5 wt.% La) connects to the three-phase equilibrium RE5Mg41 + RE2Mg17 + REMg3 shown in . The other three-phase equilibrium, RE5Mg41 + RE2Mg17 + REMg12, does not prevail up to the liquid region but ends in the binary edge system Mg–Ce at 609 °C in the eutectoid RE2Mg17 = REMg12 + RE5Mg41. None of these ternary invariant reactions involve the (Mg) phase, which solidifies in the monovariant eutectic reaction L = (Mg) + REMg12, stretching across the ternary from 611 °C (Mg–La) down to 595 °C (Mg–Ce), the lowest liquidus point in this ternary system. The calculated solidification sequence is exemplified for alloy A2 in , showing the enthalpy variation for a better comparison to the thermal signals. Under equilibrium conditions, after precipitation of some primary REMg3, the main solidification occurs in the narrow temperature range from 635 to 630 °C, producing 94% RE2Mg17 and 6% REMg3. Subsequent solid state transformations result finally in 96% RE5Mg41 and 4% REMg3 at 500 °C. Under Scheil conditions the freezing range extends down to 595 °C, as indicated by the dashed line in ; in addition, the start of each new solidification stage with new phases is indicated by an arrow. At 595 °C the phase fractions are 87% RE2Mg17, 7% REMg3, 5% REMg12 and less than 1% (Mg). This compares reasonably well with the primary REMg3 in the as-cast microstructure in . The heat treated microstructure compares well with the calculated equilibrium phases consisting of a majority of RE5Mg41 and some REMg3. The phase RE2Mg17, also found in the as-cast microstructure, actually transforms, as required by the equilibrium state. As it is a peritectoid reaction, the extent of the reaction is limited by diffusion through the reaction product RE5Mg41, which means that some unreacted RE2Mg17 remains in the microstructure. The as-cast microstructure is, thus, somewhat between the Scheil and equilibrium conditions. This is corroborated by the fact that the strongest thermal signal on cooling at 630 °C is also located between those two conditions, as expected from the steepest drop in the enthalpy curves in . The REMg12 phase is formed only under Scheil solidification conditions at 611 °C, confirmed by a weak DSC signal and the XRD trace of the as-cast alloy in . Alloy A1 follows essentially the same solidification path as alloy A2. After Scheil solidification terminates at 595 °C, the phase fractions are 92% RE2Mg17, 6.5% REMg12, 1% REMg3, and less than 1% (Mg). The larger fraction of REMg12 is now also seen in the as-cast microstructure, . As with sample A2, RE5Mg41 is formed via the peritectoid reaction and some RE2Mg17 is retained in the microstructure. The equilibrium solidus is at 610 °C and solid state transformations in the equilibrium case result finally in 86% RE5Mg41 and 14% REMg12 at 500 °C. The limit of the large ternary solubility of RE5Mg41 is taken from the maximum value of the SEM/EDS experimental data observed in that phase, given in . Even though the RE2Mg17 phase is not present at equilibrium in the alloys A1 and A2, the corresponding value of the Ce solubility in RE2Mg17 could be taken approximately from the as-cast samples in which that phase occurs. These solubility limits were also used to determine the thermodynamic parameters of these two phases and reasonable agreement with the calculated data is obtained, as shown in . It is emphasized that no ternary interaction parameters have been used for the liquid phase.
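The Scheil and equilibrium solidification paths discussed above come from multicomponent CALPHAD calculations. As a much simpler illustration of the difference between the two limiting assumptions, the sketch below evaluates the classical binary Scheil–Gulliver and lever-rule relations for an assumed constant partition coefficient; C0 and k are placeholders and the output is not the calculation performed in the paper.

```python
def scheil_solid(C0, k, fs):
    """Scheil-Gulliver: solute content of the solid forming at solid fraction fs
    (no diffusion in the solid, complete mixing in the liquid)."""
    return k * C0 * (1.0 - fs) ** (k - 1.0)

def lever_solid(C0, k, fs):
    """Equilibrium lever rule: full diffusion in both solid and liquid."""
    return k * C0 / (1.0 - fs * (1.0 - k))

C0, k = 3.0, 0.3   # placeholder alloy content (wt.%) and partition coefficient
for fs in (0.5, 0.9, 0.99):
    print(f"fs = {fs:4.2f}:  Scheil solid = {scheil_solid(C0, k, fs):6.2f} wt.%,"
          f"  lever solid = {lever_solid(C0, k, fs):5.2f} wt.%")

# The strong late-stage enrichment under Scheil conditions is what produces the
# additional intermetallic phases observed in the as-cast microstructures.
```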
The following comparison to the experimental data of refers to the dominant primary crystallization of the intermetallic phases REMg12 and RE2Mg17 in . In the section with 75 wt.% Mg, the RE2Mg17 is formed as a primary phase at more than 3 wt.% La, in good agreement with the phase identification reported by . Essentially, the agreement is better than ±10 K, which is considered satisfactory in view of the experimental challenges for the Mg–RE systems. In the Mg–La–Nd system (), the phase REMg12 shows a large, though limited, ternary solubility of Nd but is not stable at the binary Mg–Nd edge. Also RE5Mg41 (stable at the Mg–Nd edge) and RE2Mg17 (stable at the Mg–La edge) show large ternary solubilities. The solubility limits are compiled in . The phases REMg3 and REMg again exhibit continuous solid solutions. Three three-phase equilibria exist at 500 °C above 20 wt.% Mg: (Mg) + RE5Mg41 + REMg12, RE5Mg41 + RE2Mg17 + REMg12, and RE5Mg41 + RE2Mg17 + REMg3. Most important is the critical composition at the (Mg)-apex of the triangle (Mg) + RE5Mg41 + REMg12, which is 0.86 wt.% Nd and 0.03 wt.% La. At this point the intermetallic phase in equilibrium with (Mg) switches from REMg12 (for 0 to 0.86 wt.% Nd in (Mg)) to RE5Mg41 (0.86–2.17 wt.% Nd in (Mg)) at 500 °C. Five invariant reactions with the liquid phase are calculated in the Mg–La–Nd system:

RE2Mg = L + REMg3 + REMg at 726 °C

L + RE2Mg = REMg3 + REMg at 697 °C

L + RE2Mg17 + REMg3 = RE5Mg41 at 601 °C

L + RE2Mg17 = REMg12 + RE5Mg41 at 585 °C

L + REMg12 = (Mg) + RE5Mg41 at 570 °C

Even though (Mg) is involved in the last reaction at 570 °C, this is not the liquidus termination because this reaction is not a eutectic but a transition-type reaction. The lowest liquidus point in the Mg–La–Nd system coincides with the binary Mg–Nd eutectic, L = (Mg) + Nd5Mg41, at 548 °C. For alloy B1 the Scheil solidification terminates at 548 °C, after the precipitation of several phases in the sequence RE2Mg17, REMg12, RE5Mg41, and (Mg). The solid phase fractions at 548 °C are 53% RE2Mg17, 20% (Mg), 19% RE5Mg41, and 8% REMg12. The sequence of primary RE2Mg17 and secondary REMg12 is also seen in the as-cast microstructure, . The equilibrium solidus is at 585 °C and solid state transformations, in the equilibrium case, result finally in 79% REMg12 and 21% RE5Mg41 at 500 °C. This agrees well with the heat treated microstructure of alloy B1 in . Alloy B1 is located close to the border of the three-phase equilibrium REMg12 + RE5Mg41 + RE2Mg17 in . For alloy B2 the calculation shows primary solidification of RE2Mg17 followed by secondary REMg3 until the reaction L + RE2Mg17 + REMg3 = RE5Mg41 is encountered at 601 °C. Under Scheil conditions such a true peritectic reaction cannot proceed and the 25% RE2Mg17 and 2% REMg3 are 'frozen-in'. Scheil solidification proceeds with precipitation of RE5Mg41 and (Mg). The REMg12 does not appear to precipitate from the liquid. However, as seen in , it is a part of the equilibrium microstructure. Hence it appears that there has been a partial transformation towards the equilibrium microstructure after solidification, particularly because the REMg12 sits on the boundary between the RE2Mg17 and RE5Mg41 phases. The equilibrium solidus is at 570 °C and peritectic reactions as well as solid state transformations in the equilibrium case result finally in the calculated phase fractions of 87% RE5Mg41, 9% REMg12 and 4% (Mg) at 500 °C.
The former two phases are indeed observed in the microstructure in .

Thermodynamic assessments of the Mg–La–Ce and Mg–La–Nd systems have been completed and compared with critical microstructures in both the as-cast and heat treated conditions. It was found that there is complete solubility of Ce and La in the REMg3 and REMg12 phases, together with extensive but limited solubility in the RE2Mg17 and RE5Mg41 phases in the Mg–La–Ce system. In the Mg–La–Nd system extensive but limited solubility was found for each of the REMg12, RE2Mg17 and RE5Mg41 phases, with complete mixing in the REMg3 phase. The thermodynamic predictions matched well with the phase determination of the samples solution treated at 500 °C. In the as-cast samples, however, up to five different phases could be observed at any one time. It appears that this is due to a combination of segregation during solidification, peritectic reactions in the remaining liquid, and peritectoid and other solid phase transformations after solidification. Using these considerations the as-cast microstructures can be well explained by thermodynamic calculations.

A Decoupling Control Model on Perturbation Method for Twin-Roll Casting Magnesium Alloy Sheet

To better understand the twin-roll casting process, five critical process equations were established based on analysis of the solidification phenomenon, the geometry of the molten metal pool, the continuity of the metal, and the balance of energy and momentum: the equations of pool level, solidification, roll separating force, roll gap and casting speed. Meanwhile, to obtain a uniform sheet thickness and keep a constant roll separating force, a decoupling control model was built on the perturbation method to eliminate interference among the process parameters. The simulation results show that the control model is valuable for quickly and accurately determining the control parameters. Moreover, high-quality Mg alloy sheets were cast by applying this model.

Strip casting combines the two processes of continuous casting and hot rolling, which offers many advantages such as smaller space requirements, lower investment cost, energy saving and lower atmospheric emissions compared with conventional continuous casting. The basic concept of twin-roll casting was proposed early on, but it took a long time to implement the idea because there were many problems in control technology, measurement devices and theoretical models. In the last two decades, with the development of the related technologies, twin-roll strip casting has gradually become a hot topic in the metal cast-rolling field. Bernhard et al. described the automation of a twin-roll laboratory caster and developed a non-linear state-space process model which represented the dynamics of the solidification and forming process. Cao et al. established a mathematical model of the rolling force on the basis of viscous fluid mechanics and the traditional hot rolling model; meanwhile, an intelligent algorithm was used to predict the rolling force. Some researchers considered molten steel level control and offered different fuzzy controllers to deal with the non-linear, uncertain and time-varying nature of the casting process. Through analysis of the strong interaction and nonlinearity among the control variables in the casting process, John et al. derived a 3 × 3 linearized model for control analysis, and the model was simplified to a 2 × 2 size on a justified basis.
The model, calibrated against an experimentally validated static model, was considered a good approximation of the casting process during steady state and offered an important reference for process control. Hong et al. investigated a two-level control strategy for the twin-roll strip caster, in which the low-level controllers were designed to regulate the roll gap, the pool level and the casting speed, respectively, while the high-level controller supervised the overall control performance and generated appropriate reference signals for the low-level controllers. The simulations show that this control strategy is very effective in handling the multi-variable non-linear casting control problem. Although the above results contribute greatly to model simplification and the overall control strategy, they are difficult to apply to real caster control because of overly complicated theory or long data exchange times.

In the twin-roll strip casting process, control features such as complexity, nonlinearity, coupling and time delay greatly restrict the industrialization of this new technology. In this study, firstly, based on analysis of the twin-roll casting process, five physical equations are built: the equations of molten metal level, solidification, roll separating force, roll gap and casting speed. Secondly, in order to improve product quality, the roll gap and the roll separating force must be kept constant; to fulfil this control requirement, a decoupled, linearized overall control model of twin-roll strip casting based on the perturbation method is established. Finally, simulations based on the above model give several appropriate control parameters. High-quality Mg alloy sheets with uniform thickness and good microstructure distribution have been produced by applying these parameters.

In the casting process, solidification of the molten metal is completed rapidly and the process window is narrow. Small variations of the process parameters lead to severe defects in the cast strip, or even to leakage of molten metal or breakage of the strip. Therefore, to obtain a good control result and a high-quality strip, it is necessary to build dedicated process models. A vertical twin-roll caster has been developed in the Magnesium Alloy Cast-rolling Engineering Research Center of the University of Science and Technology Liaoning to produce thin strips continuously at thicknesses from 1 to 4 mm and casting speeds from 5 to 60 m/min. The schematic diagram is shown in . Molten metal is poured from a tundish through a submerged nozzle into the wedge-shaped pool formed by two rolls rotating in opposite directions and two side dams. Once the liquid contacts the surfaces of the rolls, which are internally cooled with circulating water, a thin solidified layer forms and thickens as the two rolls rotate. Finally, the two shells weld together at a position above the roll nip, also called the kiss point. The flow rate of the liquid metal from the nozzle into the pool is controlled by adjusting the height of the stopper, which is driven by a micro servo motor. The roll gap is adjusted by a hydraulic servo system.
The casting speed is controlled by a DC motor.

In this section, the control equation for the molten metal level is described according to the literature:

Qin − Qout = L dS/dt = L [H dG/dt + (G + 2R − 2√(R² − H²)) dH/dt]

where Qin and Qout are the input flow and output flow of the pool between the two rolls, respectively, and L, G, R and H are the roll width, the roll gap, the roll radius and the liquid metal pool height, respectively. The input flow Qin is simplified as being proportional to the stopper opening height hs, i.e. Qin ≈ ks hs, where ks is determined empirically. The output flow Qout is given as Qout = L G ω R, where ω is the casting angular speed. From the resulting molten metal level equation, the level H can be adjusted by the stopper height hs, but the roll gap and the casting speed are strongly coupled with the level.

On the basis of several research results, in the Lagrangian description the thickness δ of the shell is described as , where t is the time for which the metal particle has moved from the solidification start point A/B to the current position, τ is a time delay that can be calculated from the pouring temperature, the metal solidification temperature and the metal cooling rate, and C and β are coefficients determined empirically. At the kiss point, the thickness of the solidified layer is obtained as , where tk is the time for which the particle has travelled from point A/B to the kiss point. Because the solidification process is very rapid and tk is small, the casting angular speed ω can be assumed constant during this process, so an equation relating tk to the angular positions is obtained, where θh is the angle between the X-axis and OA→ and θk is the angle between the X-axis and OKp→. From these relations the solidification process equation is obtained; tk can be calculated by a numerical method, and θk, which denotes the location of Kp, can be determined from Eq. . The variables involved, including the metal pool level H, the roll speed ω and the roll gap G, play important roles in determining θk.
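The pool-level balance introduced above can be rearranged into an ordinary differential equation for H and integrated numerically. The Python sketch below does this with an explicit Euler step; all numerical values (roll geometry, stopper gain and opening, casting speed) are illustrative assumptions rather than parameters of the caster described here.

```python
import numpy as np

def pool_level_rate(H, G, dG_dt, h_s, omega, L, R, k_s):
    """dH/dt from the balance Qin - Qout = L*[H*dG/dt + (G + 2R - 2*sqrt(R^2 - H^2))*dH/dt],
    using the simplifications Qin ~ k_s*h_s and Qout = L*G*omega*R given in the text."""
    Q_in = k_s * h_s
    Q_out = L * G * omega * R
    pool_width = G + 2.0 * R - 2.0 * np.sqrt(R**2 - H**2)   # pool width at level H
    return (Q_in - Q_out - L * H * dG_dt) / (L * pool_width)

# Illustrative (assumed) parameters, SI units -- not the parameters of this caster:
L, R = 0.25, 0.25            # roll width and roll radius, m
G, dG_dt = 0.002, 0.0        # roll gap (m) and its rate of change (m/s)
omega = 0.5                  # casting angular speed, rad/s
k_s, h_s = 0.008, 0.01       # stopper gain (m^2/s) and stopper opening height (m)

H, dt = 0.05, 0.01           # initial pool level (m) and explicit Euler time step (s)
for _ in range(500):         # integrate 5 s of pool-level dynamics
    H += dt * pool_level_rate(H, G, dG_dt, h_s, omega, L, R, k_s)

# With these numbers Qin slightly exceeds Qout, so the pool level rises slowly.
print(f"pool level after 5 s: {H * 1000:.1f} mm")
```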
Some equations describing the roll force in hot rolling are available in the literature. At an arbitrary differential section of the solid zone, shown as the shaded section in , there exist the specific pressure per unit arc length Pn and the specific shear force τn. The contact arc length between the roll and the differential unit is ds ≈ dy/cosθ, where θ is the angle between the X-axis and the line connecting the coordinate origin and the contact point. The roll separating force F is the horizontal resultant force at the contact surface between the roll and the differential section, so F is described as , where L is the roll width and Hk is the height of the kiss point. Compared with Pn, τn tanθ is small and is neglected in Eq. . At the contact surface between the roll and the differential section, the vertical resultant force is 2(Pn tanθ dy − τn dy), and the vertical resultant stress on the differential section is (σy + dσy)(x + dx) − xσy. Based on the static force equilibrium equation, the second-order differential terms are neglected, giving Eq. . Similarly, the algebraic sum of the horizontal forces on the differential section, Pn dy − τn tanθ dy − σx dy, should also be zero. Omitting τn tanθ, the equality σx = Pn can be derived. Based on the Mises yield criterion, the relationship between the stresses in the horizontal and vertical directions is given as

σx − σy = β σs, β = 2/√(3 + μσ²), μσ = [σz − (σx + σy)/2] / [(σx − σy)/2]

where β is the intermediate principal stress influence coefficient, μσ is the Lode stress parameter, and σs is the yield stress of the metal when it is extended or compressed in one direction only. From this, dσx = dσy is obtained. Considering the characteristic of the casting process that the metal deformation is concentrated near the roll exit, dx ≈ 0 and x = G (G is the roll gap). Meanwhile, assuming that the deformation of the cast metal meets the maximum friction condition, namely τn = K/2, Eq. follows, where K is the yield strength of the cast metal, which varies with temperature and can be determined empirically. On the interval [y, Hk], linearizing Eq. and considering R >> G, the separating force equation is derived as . It shows that the roll gap and the location of the kiss point should be kept constant in order to keep the roll separating force steady.

In the caster, a DC motor drives the two rolls synchronously through a gear transmission unit. The casting speed control equation in the Laplace domain is given by

Ua − (Ra/Cm) Mc − (La/Cm) Mc s = [(La Jm/Cm) s² + ((La Bm/Cm) + (Ra Jm/Cm)) s + (Ra Bm/Cm + Ce)] ω

where Ua is the armature voltage, ω is the motor angular speed, La and Ra are the armature inductance and resistance, respectively, Ce is the proportional coefficient of the winding back electromotive force, Cm is the torque coefficient, and Jm, Bm and Mc are the roll moment of inertia, the viscous friction coefficient and the load torque, respectively. From this equation, the casting speed ω is determined by the control voltage Ua, and the load torque Mc acts as a disturbance.

The strip thickness is directly determined by the roll gap, so the roll gap control must be fast and accurate. In the caster, one roll is fixed and the other is adjusted by a hydraulic servo system, which mainly consists of a servo valve, a hydraulic cylinder and a displacement sensor. Based on our previous work, the piston rod displacement of the hydraulic cylinder Y(s) and the transfer function of the servo valve Gv(s) are described as

Y(s) = [(Kq/A1) Xv(s) − ((Kce + (Vt/βef) s)/A1²) F(s)] / [(Vt Mt/(βef A1²)) s³ + ((Vt Bp/(βef A1²)) + (Kce Mt/A1²)) s² + s]

where Xv is the valve-core displacement of the servo valve and F is the roll separating force. The other symbols are defined in . Finally, all the system parameters used for the simulations in section are listed in .

There are two control objectives in the casting process. One is to obtain a uniform desired strip thickness, which means the roll gap should be stable, and the other is to keep the roll separating force constant. However, the physical equations in section are highly non-linear and coupled, so it is very difficult to achieve the control objectives using classical control methods. In this study, the overall control model of the casting process is established using the perturbation method to explore the control rules, with the thickness of the shell taken as the perturbation parameter. In practical application, it is easy to keep the casting speed stable with a DC double closed-loop speed regulator with load torque observers.
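Treating the load torque Mc as a separate disturbance, the speed equation above defines a second-order transfer function from Ua to ω. The sketch below builds it with scipy.signal using assumed motor parameters (all values are placeholders, not the drive used on this caster) and checks the steady-state speed for a unit voltage step.

```python
from scipy import signal

# Assumed DC-motor/roll-drive parameters (placeholders):
La, Ra = 0.01, 0.5        # armature inductance (H) and resistance (ohm)
Cm, Ce = 1.2, 1.2         # torque coefficient and back-EMF coefficient
Jm, Bm = 2.0, 0.05        # roll moment of inertia (kg m^2) and viscous friction

# omega(s)/Ua(s), with the load torque Mc handled separately as a disturbance:
num = [1.0]
den = [La * Jm / Cm, (La * Bm + Ra * Jm) / Cm, Ra * Bm / Cm + Ce]
motor = signal.TransferFunction(num, den)

t, w = signal.step(motor)                         # unit-step response of the speed
print("steady-state speed per volt:", w[-1])      # should approach the DC gain
print("expected DC gain           :", 1.0 / (Ra * Bm / Cm + Ce))
```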
Meanwhile, the pool level is better controlled by fuzzy methods. Considering the overall control performance of the casting process, the bottleneck is therefore how to decouple the gap control equation of the hydraulic servo system from the roll separating force equations. Here K1 = Kq/A1, K2 = (Kce/A1²)(1 + (Vt/(βef Kce)) s), and LH(s) = (Vt Mt/(βef A1²)) s³ + ((Vt Bp/(βef A1²)) + (Kce Mt/A1²)) s² + s. Because the liquid region has little influence on the roll separating force, F0 can be neglected. According to Eq. , the maximum roll separating force is derived. When the force produced by the hydraulic cylinder and the roll separating force reach a momentary balance, the displacement of the hydraulic cylinder Y(s) is equal to the roll gap G(s), and Eq. can be substituted accordingly.

During the pouring period, when the liquid metal is just poured onto the rolls, parameters such as the location of the kiss point and the thickness of the shell at the kiss point change dramatically. When the pouring quantity of liquid metal becomes stable, the gap is gradually brought under control; meanwhile these two parameters also become stable and the roll force remains constant. Thus both the location of the kiss point and the thickness of the shell play important roles in the stability of the roll force and the roll gap control. Based on the above analysis, to solve the non-linear coupling problem in Eq. , the thickness δk is regarded as the perturbation variable and is substituted into Eq. , giving G(s)LH(s)+K2KL2δkR2+K2KL2δkRG(s)=K1Xv(s). Letting the interference item K2KL2δkR2 be multiplied by G(s)/δk, which forms a second-order interference term, the transfer function becomes Gc(s) = G(s)/Xv(s) = K1/[LH(s)+K2KLR2δk+K2KLR22δk2]. The valid range of the perturbation variable is G/2 < δk < (0.414 × R − 0.707G). Setting the set-point of the roll gap G to 0.001 m, the value of δk is 0.03 m. According to the values listed in , the system transfer functions Gv(s) and Gc(s) are obtained as follows: Gc(s)=1.176×107s3+104.4s+3.85×106+3.258×108.

In order to better analyse the casting process and obtain proper control parameters, a PI controller is adopted and a series of simulations is carried out. The controlled variable is the roll gap, which is affected by the roll separating force. To account for the influence of the overshoot and the settling time on the control result, a comprehensive evaluation index of the control performance is defined as , where σ and ts are the overshoot and the settling time of the unit step response, respectively. If the response diverges or a steady-state error exists, the control result is regarded as the worst and is omitted. The proportional coefficient Kp is discretized over the range [10, 100] with a step of 10, and the integral coefficient Ki over [0.1, 1] with a step of 0.1. The comprehensive evaluation indices of the control performance obtained with the different values of Kp and Ki are shown in . When the PI parameters are in the shaded area of , for example when Kp is 40 and Ki is 0.5, the unit step response curve of the roll gap is shown in . At the beginning of the response, the roll gap fluctuates dramatically because both the gap error and its rate of change are large; as these two quantities gradually decrease, the roll gap becomes stable. After 0.15 s, the gap basically stays at the set value of 1 mm. This shows that the control performance is satisfactory.

The control model was validated experimentally on the test caster using the appropriate PI parameters obtained from the simulation.
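The PI tuning study described above can be reproduced in outline with a few lines of Python. Because the exact transfer function Gc(s) could not be recovered cleanly from the text, the plant below is a generic third-order transfer function with assumed coefficients, and the evaluation index is assumed to be a simple weighted sum of overshoot and settling time; only the Kp/Ki grid follows the text.

```python
import numpy as np
from scipy import signal

# Assumed third-order plant standing in for Gc(s); coefficients are illustrative only.
plant_num = [1.176e7]
plant_den = [1.0, 104.4, 3.85e6, 3.258e8]

def closed_loop_step(Kp, Ki, t):
    """Unit step response of the closed loop with PI controller C(s) = Kp + Ki/s."""
    ol_num = np.polymul([Kp, Ki], plant_num)       # numerator of C(s)*G(s)
    ol_den = np.polymul([1.0, 0.0], plant_den)     # denominator of C(s)*G(s)
    cl_num = ol_num                                # closed loop: CG / (1 + CG)
    cl_den = np.polyadd(ol_den, ol_num)
    _, y = signal.step((cl_num, cl_den), T=t)
    return y

def performance_index(y, t, w_sigma=1.0, w_ts=1.0, band=0.02):
    """Assumed index J = w_sigma*overshoot + w_ts*settling_time (smaller is better)."""
    if abs(y[-1] - 1.0) > band:                    # divergence or steady-state error
        return np.inf
    overshoot = max(y.max() - 1.0, 0.0)
    outside = np.where(np.abs(y - 1.0) > band)[0]
    ts = t[outside[-1]] if len(outside) else 0.0
    return w_sigma * overshoot + w_ts * ts

t = np.linspace(0.0, 0.5, 5000)
best = None
for Kp in range(10, 101, 10):                      # Kp in [10, 100], step 10
    for Ki in np.arange(0.1, 1.01, 0.1):           # Ki in [0.1, 1], step 0.1
        J = performance_index(closed_loop_step(Kp, Ki, t), t)
        if best is None or J < best[0]:
            best = (J, Kp, Ki)

print("best (J, Kp, Ki):", best)
```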
The cast metal is an Mg alloy (AZ31, with a composition of Al 3.17%, Zn 0.902%, Mn 0.31%, Cu 0.03%, Si 0.1%, balance Mg), and the casting temperature is 680 °C. The casting speed and roll gap curves are shown in , and the casting separating force curve is shown in . These figures illustrate the response curves of the key parameters at the temporary stage, in which the main objective is to form the molten metal pool and bring the process parameters into the range of the control preset values as quickly as possible. During this stage the gap PI controller is not active, so that leakage of molten metal through the large roll gap is avoided, and the process parameters change violently. From 0 s to 0.5 s, the casting speed is kept at 5 m/min, the casting force is kept at the proper working pressure of the hydraulic system (13.5 kN), which makes the two rolls touch closely, and the roll gap is zero. From 0.5 s to 1 s, the liquid metal is poured into the casting space, and when the molten metal touches the surfaces of the rolls, dendrites grow gradually inside the metal because of solidification. When the dendrite force becomes larger than the working pressure of the hydraulic system, the roll separating force increases to the maximum value of 14.5 kN, which is limited by the hydraulic system, and the roll gap opens gradually. Meanwhile, according to the roll speed control equation, the increase of the casting force raises the load torque; therefore the casting speed first decreases, then returns to the set value owing to the adjustment of the roll drive system (as in Eq. ), and then continues to rise as the speed set-point is increased (from 0.5 s to 2 s the roll speed set-point is increased by 1 m/min every 0.5 s, and the temporary roll gap is recorded in this experiment). From 1 s to 2 s, the location of the kiss point moves toward the roll exit as the casting speed increases (as shown in ), which shortens the rolling path, so the casting separating force and the roll gap decrease gradually.

At 2 s, the pool height is appropriate and is controlled by a fuzzy controller. The other process and control parameters reach the ranges of the preset values, the roll gap PI controller is then activated, and the stable casting stage begins. The roll gap set-point (5 mm in this test) is determined as the sum of the average of the recorded temporary roll gap values and an empirical adjustment intended to keep the roll separating force steady around the desired value. The parts (II) in show the control parameter curves at the stable stage, from which it is found that the gap is basically steady at the set-point and the casting separating force fluctuates around the desired value. The small fluctuations of the roll gap, the casting speed and the roll separating force are caused by the unavoidable variation of the pool height owing to the characteristics of the fuzzy controller. At the same time, the typical waves at 6 s and 10 s reflect the interaction of these process parameters. At the stable stage, because the solidification region is stable, the material deformation near the roll exit exhibits thermal elastic–plastic characteristics. Thus, when the casting speed increases, the casting separating force has a more pronounced effect, which leads to a smaller roll gap. Conversely, a decrease in casting speed weakens the effect of the casting force; the elastic restoring force of the material then becomes evident and the roll gap increases.
It can be seen from that the variation periods of the roll separating force are from 0.1 s to 0.2 s, so the dark-bright strip length should range from 13 mm to 26 mm when the casting speed is 8 m/min, which is validated well in . The microstructure of the strip is presented in , showing primary α-Mg with different morphologies of Al-rich solid solution and eutectic Mg17Al12. The grain size is about 50 μm and the grains, which are mainly equiaxed, are evenly distributed.

The control objectives of the casting process are to obtain a uniform strip thickness and to maintain a stable roll separating force. However, analysis of the five physical equations of the twin-roll casting process shows that the key control parameters, such as the roll gap, the casting speed, the roll separating force, the pool level and the location of the kiss point, have strongly coupled non-linear relationships, so it is very difficult to obtain satisfactory results with traditional control methods. After the control parameters are decoupled and linearized by the perturbation method, the overall control model of the twin-roll casting process is established. A series of simulations is carried out to obtain proper control parameters. Moreover, high-quality Mg alloy sheets are produced, which indicates that the decoupling control model is valuable for the twin-roll casting process.

Multiresolution mechanical characterization of hierarchical materials: Spherical nanoindentation on martensitic Fe-Ni-C steels

Systematic length scale studies are required for understanding the effects of the microstructural features that determine the mechanical properties of hierarchical materials. Recent advances in spherical indentation stress-strain protocols have made it possible to characterize the local mechanical responses at different length scales, from hundreds of nanometers to hundreds of microns. In this paper, two model martensitic steels Fe-5.1Ni-0.13C (wt.%) and Fe-5.0Ni-0.30C (wt.%) with different carbon contents were investigated using spherical nanoindentation stress-strain curves to quantify the mechanical behavior of lath martensite at multiple length scales using different spherical indenter tip radii. The indentation yield strength is dominated by the nanoscale defect structure for all indenter radii (1 μm, 16 μm and 100 μm) and does not exhibit any discernible size effect at the different length scales. The work hardening rates measured in the indentation tests at the different length scales coincide until the indentation zones grow large enough that a significant increase of work hardening occurs, which is attributed to the presence of high-angle block boundaries in the indentation probed volumes. Characteristic pop-ins were observed in the indentations performed with the 1 μm and 16 μm indenter tip sizes and have been attributed to the interaction of dislocations with lath boundaries and their eventual transmission. In addition, the correlations between the properties measured from these indentation protocols and those measured in uniaxial tensile tests are critically examined.

Major advances [] have been made in recent years in the protocols and equipment used to study the internal structure of materials spanning a multitude of hierarchical length/structure scales (from the atomistic to the macro scale).
Methods for reliably measuring the mechanical responses at the relevant hierarchical structure/length scales are critically needed to fully leverage the structure measurements and produce new scientific insights into the underlying structure-property relationships in these materials.The most common mechanical characterization techniques employed in current literature are based on conventional testing methods such as tension []. These techniques have been widely adopted because of the availability of established analyses protocols for estimating intrinsic materials properties from such measurements. However, most of these techniques are not easily adapted or extended to testing at small length scales. In fact, the time and cost involved in small-scale mechanical testing using these same test geometries is very substantial, with the sample preparation itself being a major contributor.Statistics play an important role in the proper interpretation of the results obtained in mechanical testing of small volumes and in correlating the measurements from the smaller volumes to those obtained from the larger volumes. Given the rich heterogeneity of material structure at smaller length scales (microns and sub-micron), it is only natural that a larger number of mechanical measurements are needed to fully capture the natural variance in these measurements. This is because the material microstructure is inherently heterogeneous (with variations in local features such as phase/precipitate size and shape, grain orientations, grain/phase boundaries, dislocation densities). Therefore, it is essential to develop assays that allow sampling of a large number of mechanical responses in advanced materials at the lower length scales within reasonable effort and cost. Only then, it would be feasible to confidently assess the mechanical responses of the material at hierarchical length scales, and explore systematically the underlying length-scale dependent correlations in these measurements.There exist a number of other limitations in the protocols currently employed for mechanical testing of very small volumes (submicron). First, many of the sample preparation techniques introduce surface damage []. Since it is often difficult to correct the measurements for the presence of the damaged surface layer, the reliable extraction of intrinsic material properties from these tests becomes a significant challenge. Second, there is a strong interest in estimating the properties of the individual microscale constituents present in the microstructure. Since these constituents (e.g., single phase/grain regions) can be as small as a few nanometers and may exhibit irregular shapes, it is not easy to produce samples with standardized geometries that allow an estimation of their individual properties using already established analysis protocols.Recent advances in instrumented indentation offer a promising avenue for addressing many of the limitations described above. Taking advantage of the impressive resolutions of modern nanoindentation instruments to measure load, displacement, and stiffness, our research group has developed and demonstrated novel protocols [] that are capable of extracting meaningful spherical indentation stress-strain curves by tracking the local mechanical response from the linear elastic regime to the elastic-plastic regime with small amounts of plastic strain. 
These protocols have been validated on a broad variety of materials systems and length scales ranging from about 50 nm to about 500 μm to study the contributions of the plastic deformation [] on the local mechanical properties. The measurements at the higher length scales were performed in an instrumented microindenter with customized test protocols []. In addition to recovering the mechanical responses in the form of meaningful indentation stress-strain curves, protocols have also been developed to extract intrinsic material properties from the indentation stress-strain curves [These recent developments in the acquisition and interpretation of spherical indentation stress-strain curves in a broad range of length scales have now set the stage for conducting systematic studies of mechanical characterization of hierarchical materials and understanding the underlying length scale effects. Very recently, such a systematic study was conducted in an α−β Ti alloy []. The study demonstrated the viability and merits of the newly developed protocols. In this paper, we explore the applicability of these protocols in understanding length scale effects in martensitic steels. Martensitic Fe-Ni-C steels have been selected for the present study as they represent an important class of high strength steels, and at the same time present a hierarchical microstructure that fall within the range of the length scales that can be explored by the recently developed spherical indentation protocols.There is a substantial amount of uncertainty in current literature regarding the properties exhibited by the martensitic phase in high strength low-alloyed steels. Most of the previous indentation studies on lath martensite employed a sharp diamond tip and analysis protocols that analyzed the unloading segments after introducing a significant amount of plastic deformation in the indentation zone in the loading segment []. These indentation protocols can only provide estimates of modulus and hardness. It needs to be pointed out that hardness is a measure of flow strength after some amount of nonstandard plastic deformation has been imposed on the sample. Consequently, it is entirely possible that the intrinsic mechanical properties estimated in these protocols are significantly influenced by the indentation itself. In fact, most prior studies on different material systems have reported a size effect with higher strength values measured with shallow indentation depths []. A summary of indentation measurements on lath martensite is presented in . Only a few of these studies attempted to extract the elastic modulus of lath martensite in different alloys. From the reported values of elastic modulus, a large variation in the range of 180–320 GPa is observed. Besides the differences in the chemical composition (most prominently C content) being one of the main sources of the observed variance in the measurements, other factors include variations in the testing and analysis protocols (e.g., the indentations in the different studies were carried out to different maximum indentation loads/depths). The variance in the measured hardness is remarkably large not just from one study to another, but also within a study conducted on a single alloy composition. In some cases, the large variance in the reported measurements was explained as size-effects, where smaller indentation tips and shallower indentation depths produced higher values of hardness. 
Since the hardness measurements obtained at different indentation loads/depths correspond to different amounts of plastic deformation imposed on the sample, it is impossible to ascertain whether the measured size effect is intrinsic to the material or simply a consequence of the protocols employed. As further evidence, it is noted that investigations [] conducted on various alloy compositions reported almost no size effect with different loads/depths when using a Berkovich tip geometry. However, when the indenter tip was substituted with a Vickers geometry to measure hardness at a larger length scale, lower hardness values were obtained. Generally, from these studies the measured hardness from Berkovich tips were reported to be 1.2 to 2.5 times higher than the ones from Vickers testing. This implies that the geometry of the indenter tip possibly has a significant influence on the hardness measurements. Another inconsistency observed from the previously reported studies can be traced to whether or not the indentations were conducted close to or away from high-angle grain/phase boundaries. In some studies, higher hardness values were reported at locations close to boundaries [], while in other studies the opposite trend was noted []. The effect of carbon content on the hardness of martensite was systematically investigated by Ohmura []. For the same range of C content (0.1–0.5 wt%), Vickers hardness measurements in the range of 4–8 GPa were reported in both studies, despite the significant differences in the compositions of the other alloying elements in the steels studied. This shows that the C content has the greatest influence on the hardness of martensitic steels compared with other substitutional alloying elements.In contrast to the measurements described above with the Berkovich tip, size effects were absent in the indentation yield strengths measured on other alloys [] using the recently developed spherical indentation stress-strain protocols. Therefore, the indentation yield strengths measured in the new spherical indentation protocols are better measures of the intrinsic material properties, and can be directly related to the details of the microstructure in the probed volume []. In this paper, we specifically investigated the changes in the indentation yield strengths in two different martensitic steels (with different carbon content) at different length scales. By systematically changing the length scale of the probed volume (accomplished by using indenters of different tip radii), we were able to identify the respective contributing defects (such as dislocations, interstitial carbon, grain boundaries, etc.) in the microstructure inside the indentation sample volumes probed in our experiments. Furthermore, the indentation results were suitably scaled and compared with the uniaxial mechanical properties (measured in standard tensile tests) to derive new insights on the correlations between the values measured at different length scales.Two non-commercial grade alloys with compositions Fe-5.1Ni-0.13C (wt.%) and Fe-5.0Ni-0.30C (wt.%) were provided by ArcelorMittal Research Center in Maizières (France). Both materials were austenitized at 900 °C for 5 min and water-quenched to obtain martensitic microstructures.Sample preparation was carried out using standard metallography procedures. This involved grinding with silicon carbide papers down to grade 4000 and polishing sequentially with diamond suspensions of 3 μm, 1 μm, and OP-A alumina/OP-S silica suspension. 
For microstructure characterization, EBSD (electron backscatter diffraction) and ECCI (electron channeling contrast imaging) measurements were carried out in a TESCAN MIRA3 and a Zeiss “Merlin” scanning electron microscope with a field emission gun at 20 kV and 30 kV, respectively. A working distance of 5–6 mm was used to increase the backscatter electron signal and enhance contrast for ECCI.

A PANalytical Empyrean X-ray diffractometer with Cu Kα radiation (λ = 0.1540598 nm, powered at 45 kV and 40 mA) was employed for scans over the 2θ range from 30 to 130° at a rate of 0.04°/min. In addition, higher resolution scans on the (200) and (211) diffraction peaks of martensite were obtained at a small step size of 0.0263° and an acquisition speed of 0.005°/min. The as-acquired diffraction curves were processed by Kα2 stripping and background removal.

Tensile testing was performed using standard protocols on specimens with a rectangular cross section of about 4.1 × 0.9 mm² and a gage length of 15 mm on a Zwick Roell machine at a quasi-static strain rate (0.0001 s⁻¹).

Spherical nanoindentation tests were performed on an Agilent G200 Nanoindenter with an XP head and a CSM (continuous stiffness measurement) module. The CSM superimposes small unloads with a displacement amplitude of 2 nm and a frequency of 45 Hz on the monotonic loading conditions imposed on the sample. Tests were run with a constant strain rate (loading rate divided by the load) of 0.05 s⁻¹ to maximum depths of 300, 500, and 700 nm for indenter tip sizes of 1, 16, and 100 μm, respectively. A minimum of 30 nanoindentation tests was conducted at random locations on the surface of each sample for each indenter size. The collected data were converted to indentation stress-strain curves as described below.

Hertzian contact theory [] provides the relationship between the indentation load (P) and the elastic indentation depth (h_e) based on the effective isotropic modulus (E_eff) and the effective radius (R_eff):

P = (4/3) · E_eff · √(R_eff) · h_e^(3/2)
1/E_eff = (1 − ν_i²)/E_i + (1 − ν_s²)/E_s
1/R_eff = 1/R_i + 1/R_s
E_eff = S/(2a)

Here E_i, ν_i and E_s, ν_s are the Young's moduli and Poisson ratios of the indenter and the sample, respectively. S is the harmonic contact stiffness and a is the radius of the contact area, both of which evolve continuously with increasing indentation load/depth. The analysis contains two steps: i) accurately finding the effective point of the initial contact (zero-point correction), which is crucial in order to deal with any artifacts at the initial contact caused by imperfections in indenter shape or unavoidable surface conditions such as oxide layers and roughness, and ii) extracting indentation stress-strain curves from the corrected data. The indentation stress σ_ind and the indentation strain ε_ind are defined as in [], where h_t denotes the total indentation depth. The contact radius a is estimated using the continuous stiffness measurement (denoted as S in the relations above), while assuming that E_eff remains constant and has been estimated from the initial elastic loading segment []. From each measured indentation stress-strain curve, the indentation elastic modulus, the indentation yield strength, and the indentation work hardening rate were extracted. A 0.002 plastic strain offset was employed to identify the indentation yield strength.
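To make the conversion described above concrete, the following minimal sketch (in Python, SI units, operating on already zero-point-corrected load P, total depth h_t and CSM stiffness S arrays) applies the Hertzian relations quoted here. The stress and strain definitions used in the sketch, σ_ind = P/(πa²) and ε_ind = 4h_t/(3πa), are the ones commonly used with these protocols and are an assumption, since the exact expressions are not reproduced in the text; the function names and the window choices are likewise placeholders rather than the authors' implementation.

```python
import numpy as np

def indentation_stress_strain(P, h_t, S, E_eff):
    """Convert zero-point-corrected load P (N), total depth h_t (m) and CSM
    stiffness S (N/m) into indentation stress-strain data.

    E_eff is the effective modulus (Pa); in the cited protocols it is regressed
    from the initial Hertzian elastic segment and treated as constant, as here."""
    a = S / (2.0 * E_eff)                 # contact radius from E_eff = S / (2a)
    sigma = P / (np.pi * a**2)            # assumed definition: sigma_ind = P / (pi a^2)
    eps = 4.0 * h_t / (3.0 * np.pi * a)   # assumed definition: eps_ind = 4 h_t / (3 pi a)
    return sigma, eps, a

def offset_yield_strength(sigma, eps, E_ind, offset=0.002):
    """Indentation yield strength: first point whose indentation plastic strain
    (eps - sigma / E_ind) reaches the 0.002 offset."""
    eps_plastic = eps - sigma / E_ind
    above = np.flatnonzero(eps_plastic >= offset)
    return sigma[above[0]] if above.size else np.nan
```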
The indentation work hardening rate was extracted by fitting a line to the indentation stress-strain plot from the indentation yield point up to 0.015 indentation plastic strain.An overview of the hierarchical martensitic microstructure in both alloys is illustrated in , provides information about the crystallographic orientations of laths, blocks and packets as well as information on the prior austenite grain structure. However, EBSD only is not entirely able to resolve the low misoriented substructure within martensite blocks. Yet, electron channeling contrast imaging (ECCI) micrographs in (c and d) reveal the lath morphologies. This technique permits imaging of defects in the lattice by diffraction contrast, very similar to the two-beam condition technique used in conventional TEM (transmission electron microscopy) for imaging dislocations. The advantage of ECCI over TEM is that it can be carried out in the scanning electron microscope (SEM) on bulk samples with wider field of view and exhibits excellent capabilities in terms of defect imaging on the micro- and nano-scale []. Defects such as dislocations produce lattice distortion around the dislocation core that result in contrast changes. The intensity of this contrast depends on how the back-scattered electrons interact with the crystal lattice.ECCI confirms that both steels exhibit a complex and heterogeneous microstructure containing parallel stacked laths []. Lath morphologies were already studied in detail on the same material system [], and have shown existence of a few coarse laths in a microstructure dominated by fine laths (also see c and d). A brief summary of the microstructural analysis performed previously on the same materials [] is reproduced here for the convenience of the reader. Laths with a 2D projected size larger than 5 μm2 were considered as coarse laths. The average area fractions of coarse laths were reported as 8.8% and 3.9% for Fe-5.1Ni-0.13C and Fe-5.0Ni-0.30C, respectively. The maximum width of the coarse laths was measured to be 3.5 μm, while the width of the fine laths varies between 50 and 500 nm (with an average of 200 nm). The microstructure of the alloy with the higher carbon exhibited a finer lath size, attributed to the restricted martensitic transformation at the higher carbon content []. The prior austenite grain size is about 25 μm in both alloys, and therefore the hierarchy in terms of packets and blocks should be comparable between the two alloys. More details about microstructural defects, such as dislocations, within the lath martensite are revealed in the higher resolution ECCI micrographs shown in (e and f). The lattice defects appear with a brighter contrast than the matrix in the ECCI micrographs. The dislocation networks are observed throughout the entire microstructure. In addition, this qualitative analysis of the dislocation structure already suggests a higher dislocation density in the Fe-5.0Ni-0.30C alloy as compared with its lower carbon counterpart.The degree of lattice distortion as a result of the interstitial carbon atoms is studied by using x-ray diffraction (XRD). (g–i) shows the XRD results from the as-quenched samples, which include the peaks for martensite. In spite of the low amounts of alloying elements in the studied material, a small amount of retained austenite was detected in the spectrum. This has been observed in other studies [] by TEM where thin films of retained austenite between lath martensite were detected even after fast quenching. 
Higher resolution XRD scans were collected on 43–47° and 63–67° 2θ ranges for the (110) and (200) martensite peak positions, respectively. The total carbon content entrapped in interstitial lattice positions is not sufficient to display the splitting of the (200) peak due to tetragonality of martensite. Also, autotempering occurs simultaneously during quenching []. This means that not all carbon atoms remain trapped in the lattice, but also decorate and pin dislocations and eventually form cementite precipitates nucleating from dislocation cores []. In the higher carbon alloy, the formation of carbon-rich clusters and transition carbides is additionally driven by spinodal decomposition at even smaller length scales (within the dislocation network) []. In general, the high carbon alloy experiences less autotempering during quenching due to the lower Ms temperature, so that the fraction of interstitially dissolved carbon atoms is higher than in the low carbon alloy. The slight shifts of the (200) peak to the right and the (110) peak to the left are attributed to the expansion and contraction of the crystal lattice along those directions during martensitic transformation, which is enhanced in the high carbon alloy. This indicates tetragonal distortion of the bcc structure particularly in the Fe-5.0Ni-0.30C alloy due to the elevated interstitial carbon content.Examples of nanoindentation measurements obtained in this study using the 1, 16, and 100 μm indenter tip sizes are shown for both alloys in . As discussed earlier, the transition from elastic to plastic regimes is not obvious from the load-displacement plots shown in (a–c). The details of the load-displacement curves at the very early stage of loading are magnified and shown in (d–f). These plots reveal a discernible difference in the load-displacement responses for both alloys, even at small indentation depths. However, the exact point of initiation of plastic deformation is still not easily identified from these plots. The elastic to plastic transition becomes obvious only after the data is corrected (i.e., zero-point correction []) and converted to the indentation stress-strain curves shown in (g–i) using our recently developed protocols []. The elastic segments in the initial loading segments are highlighted in (d–f). The corresponding calculated values of the elastic moduli from these curves are also provided in the same figures. It is important to recognize that the very early segments of the measured load-displacement curves are excluded from the analyses as they do not match the Hertzian contact theory. This is because the contact in these very early segments is likely to be different from the ideal contact of smooth quadratic surfaces assumed in the Hertzian contact theory. Therefore, the concept of effective contact mentioned earlier is crucial for the proper analysis of the measured load-displacement curves. 
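The zero-point correction mentioned above can also be illustrated with a short sketch. For a Hertzian contact the CSM stiffness obeys S = 3P/(2h_e), so plotting P − (2/3)Sh against S over the initial elastic segment should give a straight line whose intercept and slope yield the load and displacement offsets P* and h* of the effective contact point. The regression window and the exact fitting procedure of the published protocols are assumptions here; the snippet only conveys the idea.

```python
import numpy as np

def zero_point_correction(P, h, S, elastic_window=slice(50, 250)):
    """Estimate the effective point of initial contact (P*, h*) from raw load P,
    raw displacement h and CSM stiffness S.

    For Hertzian contact S = 3(P - P*) / (2(h - h*)), which rearranges to
        P - (2/3) S h = P* - (2/3) h* S,
    i.e. a straight line in S. A linear fit over an assumed elastic window gives
    P* (intercept) and h* (from the slope)."""
    x = S[elastic_window]
    y = P[elastic_window] - (2.0 / 3.0) * S[elastic_window] * h[elastic_window]
    slope, intercept = np.polyfit(x, y, 1)
    P_star = intercept
    h_star = -1.5 * slope
    return P - P_star, h - h_star, P_star, h_star
```

Data points recorded before the effective contact point, which do not follow the Hertzian relation, would be excluded from the subsequent analysis, consistent with the exclusion of the very early segments described above.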
As mentioned earlier, the indentation yield strength is defined using a 0.002 plastic strain offset in these measurements, while the indentation work hardening rate is defined as the slope of a line fitted to the stress-strain data from the yield point to a plastic indentation strain of 0.015.Since the martensitic microstructure exhibits a high degree of heterogeneity in terms of local carbon distribution, grain boundary density, dislocation density, crystallographic orientations (see ]) at the scale of the indentation probe volumes, it becomes essential to conduct multiple measurements at randomly selected locations on the sample. This allows us to arrive at a statistically meaningful comparison and interpretation of the effects of carbon on the measured indentation properties. The results from 30 nanoindentation tests per indenter tip size on each alloy are presented in (a–c). There is a clear overall trend in these results indicating that the Fe-5.0Ni-0.30C alloy (red) exhibits a higher indentation yield strength than the Fe-5.1Ni-0.13C alloy (blue). Also, the variance in the results is larger with the smaller indenter tip sizes. This variance can be attributed to a number of factors in the local microstructure including differences in carbon content, dislocation density, boundary density, and the crystallographic orientations of the laths [The means and the standard deviations of the measured indentation elastic moduli (Eind), indentation yield strengths (Yind) and indentation work hardening rates (Hind) are summarized in . The averages are plotted against the estimated primary indentation probe volume in (d–f), where the primary indentation volume is estimated as a cylinder of diameter of 2a and the height of 2.4a []. Since the contact radius a evolves with increasing indentation depths, a separate estimation is carried out at each loading level (i.e. each indentation depth or load applied). Indeed, the contact radius is estimated as a part of computing the indentation stress and strain values using Eqs. (for further details of these computations the reader is referred to Refs []). The diameter of the primary indentation probed volume for 1, 16, and 100 μm indenter tip size was found to cover the range from 100 nm to 4 μm at the 0.002 offset yield point on the indentation stress-strain curves. At an indentation plastic strain of 0.015 (note that the data between yield and 0.015 strain level is used to measure Hind), the diameters of the probed volumes for the different indenter tip sizes cover the range from 140 nm to 11 μm. Clearly, these represent substantial ranges of microstructural features contained in the probed volumes which will be discussed in the following. (d–f) indicates that changing carbon content from 0.13% to 0.3% has a smaller influence on elastic modulus, but a larger influence on the indentation yield strength and indentation work hardening rate measured with all three indenter tip sizes. These increases are quantified and summarized in . The ratios of the indentation moduli, indentation yield strengths and indentation hardening rates for the two alloys were measured to be in the ranges of 1–1.1, 1.4–1.5, and 1.2–1.5, respectively. 
These observations can be explained by the fact that the elastic properties are dominated by the characteristics of the atomic bonding, while the plastic properties are controlled by the various defects in the microstructure, which include interstitial C atoms as well as carbon clusters or early-stage transition carbides, dislocations, and the low-angle lath and high-angle block boundaries. The higher dislocation density and interstitial carbon content, as well as the finer microstructure in terms of lath size in the alloy with the higher carbon content (see ), are in line with the observed increases in the indentation yield strength and the indentation work hardening rate.

Another important observation is the relative insensitivity of the measured indentation yield strengths to the indenter tip size. As mentioned earlier, a number of prior studies in the literature [] have reported increased hardness values with smaller indentation depths (and probe volumes). The results presented in this work show that the mean indentation yield strength, obtained as the 0.002 indentation plastic strain offset value, is mostly insensitive to the indenter tip size. The very small decrease in the measured indentation yield strengths with the smaller indenter tips lies within the range of the measurement variance. The measurements obtained in this work therefore strongly suggest that the previously reported indentation size effect (i.e., higher hardness at smaller indentation depths) is most likely a consequence of the analysis protocols employed in those studies. As mentioned earlier, most previous studies have reported hardness values (as opposed to the indentation yield strengths reported in this work) that correspond to non-standard amounts of imposed plastic deformation in the indentation experiment.

(a and b) shows all indentation stress-strain plots for the different indenter sizes for both alloys. This provides a visual overview of the variance of the nanoindentation data as well as of the indenter size effect on the material responses. also reveals a significant influence of the indenter tip size on the work hardening rate. To obtain better insight into this effect, one needs to examine these results in the context of the length scales of the microstructural features inside the primary indentation zone. As discussed before, the primary indentation zone can be approximated as a cylinder with a diameter of 2a and a height of 2.4a []. The probed volumes for the 1, 16, and 100 μm indenter tips at the indentation yield strength and at 0.015 indentation plastic strain are schematically depicted on a representative ECCI micrograph of the martensitic microstructure of the Fe-5.0Ni-0.30C alloy in c and d, respectively. This schematic illustrates that the primary indentation zone sizes at the yield point and at 0.015 plastic strain are typically smaller than the average lath thickness for the measurements with the 1 μm indenter tip. This is because the indented zone size in these experiments is of the order of 112 ± 25 nm at yield (0.002 indentation plastic strain) and 146 ± 30 nm at 0.015 indentation plastic strain, while the average fine lath size is about 200 nm []. Therefore, the measurements with the 1 μm indenter tip are most likely from a single martensite lath, and in some cases might have included a single, most probably low-angle, lath boundary in the indented zone.
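Before turning to the larger tips, the cylinder approximation of the primary indentation zone used here can be made explicit in a few lines of code. The helper below implements the stated geometry (diameter 2a, height 2.4a); the example contact radii are simply half of the zone diameters quoted in the text at the yield point and are meant only as an illustration, not as a recalculation of the measured values.

```python
import numpy as np

def primary_zone(a):
    """Primary indentation zone approximated as a cylinder of diameter 2a and
    height 2.4a, where a is the contact radius (all lengths in metres)."""
    diameter = 2.0 * a
    height = 2.4 * a
    volume = np.pi * a**2 * height
    return diameter, height, volume

# Contact radii taken as half of the zone diameters quoted above at yield
# (~112 nm, ~739 nm and ~4 um for the 1, 16 and 100 um tips); illustration only.
for tip, a in [("1 um", 56e-9), ("16 um", 370e-9), ("100 um", 2.0e-6)]:
    d, hgt, v = primary_zone(a)
    print(f"{tip} tip: zone diameter {d*1e9:.0f} nm, height {hgt*1e9:.0f} nm, "
          f"volume {v*1e18:.3g} um^3")
```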
The diameter of the indentation probed volume for the 16 μm indenter tip changes from 739 ± 87 nm to 1554 ± 238 nm, which is still in the order of the average lath size (including fine and coarse laths) and possibly includes a handful of low angle boundaries. In other words, the probed material volumes in the measurements with both 1 and 16 μm indenter tips are mostly influenced by a small number of interfaces with mostly low misorientations. Therefore, it is not surprising that the averaged mechanical responses for both indenter sizes are very similar to each other, although the measurements with the 1 μm indenter tip exhibited a higher variance. On the other hand, the measurements obtained with the 100 μm indenter tip (shown in magenta in ) show a significantly higher strain hardening rate (see also ). It should be noted that the 100 μm indenters probed a volume which is larger than the average martensite block size (see d). The probed material volumes with the 100 μm indenter tip corresponded to length scales of about 4 μm at indentation yield to about 10 μm at an indentation plastic strain of 0.015. At these length scales, the indentation primary zone is likely to contain several high angle boundaries which are effective in impeding dislocation transmission []. It should also be noted that the high angle grain boundaries are likely to serve as potent dislocation sources (cf []) and can contribute in a very effective manner to the higher hardening rates measured with the larger indenter tips. Dislocation density and carbon distribution, however, equally affect indentation measurements at all scales studied, as the dislocation cell size and carbon network size are significantly smaller than all probed volumes.Another important observation in this study is the presence of pop-ins in some of the indentation tests. Pop-in events have been reported previously [] in load-control indentation tests, defined by a sudden jump in the displacement at roughly constant load. In prior studies using the same spherical indentation protocols used here, pop-ins were observed only in the fully-annealed metal samples studied with the smallest indenters []. They disappeared when the indenter tip size was increased or when the samples were given small amounts of plastic deformation []. As a result, the pop-ins observed in these studies were attributed to the activation of dislocation sources in the primary indentation zone []. However, the mechanism behind pop-ins observed in the present study must be very different, because lath martensite exhibits a significant density of pre-existing dislocations (see Another type of pop-in reported in literature is the grain boundary pop-in observed in indentation tests conducted near grain boundaries []. It is generally believed that these are caused by the pile-up of the dislocations produced in the primary indentation zone at the grain boundaries and their eventual transmission through the grain boundary. However, these grain boundary pop-ins were never observed in any of the previously reported spherical indentation measurements.In the present study, pop-ins were observed in the measurements conducted with the 1 μm and 16 μm spherical indenter tips. Typical examples are shown in . These are identified as displacement bursts exceeding 1.5 nm. Interestingly, these pop-ins are distinctly different from those reported in the earlier studies on the fully annealed samples []. First, the pop-ins observed in the present study were significantly smaller. 
For example, the pop-ins observed in the fully annealed samples exhibited displacement bursts of 10–150 nm and 5–40 nm in the tests conducted with the 1 μm and 16 μm spherical indenter tips, respectively. However, the pop-ins observed in the present study exhibited displacement bursts of 1–4.5 nm and 1.5–7.5 nm in the tests conducted with the 1 μm and 16 μm spherical indenter tips, respectively. Not only are the displacement bursts significantly smaller, they also exhibit a completely different trend with increasing indenter tip size: the displacement bursts in the current study increase with indenter tip size, while the opposite was observed in the previous study. Second, the pop-ins in the 1 μm spherical indentation tests were observed after a significant amount of plastic strain had been introduced in the primary indentation zone. These observations clearly suggest that the pop-ins are not caused by a lack of dislocation sources.

Instead, the root cause of the pop-ins appears to be related to grain boundaries, with dislocation pile-up and eventual transmission at the lath interfaces or block boundaries. The contact radius and the indentation stress were extracted at each pop-in event in the tests conducted with the 1 μm and 16 μm indenter tips and are plotted in . The contact radius associated with the pop-ins corresponds well with the average lath thickness. The dislocations are expected to interact with the dislocation arrays along the lath boundaries before they are transmitted into the neighboring lath. The range in displacement bursts between 1 and 7.5 nm might be related to the difference in dislocation transmission across high- and low-angle boundaries. High-angle block boundaries effectively hinder dislocation transmission due to the mismatch of adjacent slip planes, while low-angle boundaries present a lesser obstacle to dislocation transfer []. Another notable observation is that the indentation stress at the pop-in events is higher for the tests with the 1 μm indenter tip compared to the 16 μm indenter tip. This can be explained by the fact that the stress gradient in the tests with the 1 μm indenter tip is significantly sharper than in the tests with the 16 μm indenter tip. In other words, the stress fields produced by the smaller indenter decay very quickly and consequently need to rise to higher levels before successfully transmitting dislocations across the lath boundaries. It is also worth noting that pop-ins were not found in the tests conducted with the 100 μm indenter tip. It is reasonable that pop-ins disappeared in these tests, as one should expect a multitude of pop-in events occurring continuously in different parts of the indentation primary zone; the measured load-displacement curve then simply reflects an averaged (smoother) response, in which the individual pop-ins are no longer discernible. All of the observed pop-ins in our tests are therefore consistent with the notion of grain boundary pop-ins.

In addition to the nanoindentation tests, uniaxial stress-strain measurements were obtained for both alloys from standard tensile tests, which are presented in a. Two tensile tests were conducted for each alloy. The Fe-5.1Ni-0.13C alloy exhibits an average uniaxial yield strength of 1029 MPa, while the corresponding value for the Fe-5.0Ni-0.30C alloy is 1319 MPa (a). To maintain consistency with the indentation measurements, the work hardening values are extracted from the early parts of the stress-strain curves.
Using the stress-strain curves after the yield point and up to 0.015 plastic strain results in work hardening rates of 22.8 GPa and 32.6 GPa for Fe-5.1Ni-0.13C and Fe-5.0Ni-0.30C, respectively. The ratios of the yield strengths and the work hardening rates for the two alloys (expressed as values for Fe-5.0Ni-0.30C over the values for Fe-5.1Ni-0.13C) are 1.3 and 1.4, respectively. Note that these ratios are in excellent agreement to the corresponding ratios obtained from nanoindentation data (see ). This agreement in the ratios of the measured properties proves the accuracy and reliability of the spherical indentation stress-strain protocols employed in this work.It should be clear that the uniaxial stress-strain data cannot be compared directly with the indentation stress-strain data, as these impose very different plastic deformation fields in the sample. In a recent work based on finite element simulations employing isotropic plasticity models, Patel et al. [] suggested scaling factors of 2.2, 2.0, and 1.3 for the stress, elastic strain, and plastic strain, respectively. These factors were used to scale the measured tensile stress-strain curves in a to indentation stress-strain curves shown in b. The indentation stress-strain curves presented in b can be interpreted as data from indentation measurements in substantially large indentations, within the assumptions of an isotropic plasticity model for the effective material response. The scaled tensile stress-strain curves are superimposed on the indentation measurements in a and b. It is seen that the scaled indentation stress-strain responses from b are in reasonable agreement with the nanoindentation measurements for both alloys, at least up to small plastic strains.The scaled indentation yield strengths extracted from b are 2090 and 2612 MPa for the Fe-5.1Ni-0.13C and the Fe-5.0Ni-0.30C alloys, respectively, and are 22% and 3% higher than the indentation yield values measured with the 100 μm indenter tip for the Fe-5.1Ni-0.13C and the Fe-5.0Ni-0.30C alloys, respectively. The scaled work hardening rates extracted from b for the two alloys are 43.3 and 68.2 GPa, which are very close to the average values measured with the 1 μm and 16 μm indenter tips. However, the hardening rates measured with the 100 μm indenter tip are significantly higher compared to those measured in tension.The comparisons presented above between the properties measured in indentation and tension tests provide several key insights. They demonstrate good agreement for the measured yield strengths and elastic moduli in the different testing modes. To this end, the indentation protocols developed and presented in this paper provide a new set of reliable and robust tools to assess mechanical properties of hierarchical microstructures such as lath martensite in steels. Yet, it is clear that the hardening rates measured with the 100 μm indenter tip are significantly higher compared to those measured with tensile tests. There could be a number of reasons for this: (i) the deformation imposed in indentation is indeed highly heterogeneous and exhibits strong gradients. Consequently, it is possible that the indentation hardening rates will always be higher than those measured in tension. (ii) the tensile tests were conducted with a nonstandard sample geometry where the gage length/width ratio was slightly less than 4.0. This might have introduced a small inaccuracy in the measured tensile stress-strain curve. 
(iii) Tensile stress states might promote early damage initiation and result in a lower hardening rate compared to indentation tests with a significantly higher negative hydrostatic stress component. (iv) The discrepancy in work hardening rates measured with indentation and tensile testing shows that the rather simplistic scaling approach cannot sufficiently account for the complex plastic response of lath martensite at the length scale of multiple prior austenite grains that was previously analyzed by the authors [The mechanical properties of lath martensite in Fe-5.1Ni-0.13C (wt.%) and Fe-5.0Ni-0.30C (wt.%) alloys were systematically investigated at multiple length scales with spherical nanoindentation stress-strain protocols as well as standard tensile tests. Consistent results from the indentation measurements with indenter tip radii ranging between 1 μm and 100 μm as well as from the standard macroscale tensile tests attest to the reliability of the applied indentation protocols for studying hierarchical microstructures such as martensitic steels. The results provided reliable data on the indentation yield strength of lath martensite as a function of carbon content with only 3% and 22% deviations compared with tensile testing of the Fe-5.1Ni-0.13C (wt.%) and Fe-5.0Ni-0.30C (wt.%) alloy, respectively. The work hardening rates measured in indentation tests with the 100 μm indenter tips are significantly higher than those measured with the 1 μm and 16 μm indenter tips, as well as those measured in tension tests. The discrepancy in work hardening measured with different indenter tip sizes is attributed to the presence of high angle block boundaries in the 100 μm indenter tip probed volumes. The discrepancy between values obtained by 100 μm indentation and tensile tests is regarded as an inherent limitation due to the microscopic probe volume compared with macroscopic tensile samples. Yet, both indentation and tensile results consistently showed that increasing the carbon content from 0.13 to 0.3 wt% increased the yield strength by ∼42–48% and the work hardening by ∼27–47%, while the elastic modulus showed a small increase of only ∼5%.The level of consistency achieved in this study suggests that the indentation size effects on indentation yield strengths reported in prior literature are largely a consequence of the analysis protocols employed in those studies. The higher spread in the indentation data at lower length scales is attributed to the heterogeneity of the microstructure in terms of lath size, dislocation density and carbon distribution. Therefore, a sufficiently large indenter tip is required to obtain the bulk response from the indentation measurements.Effect of vanadium on the high-cycle fatigue fracture properties of medium-carbon microalloyed steel for fracture splitting connecting rodThe present investigation effort was made to study the effect of V up to 0.45% on the high-cycle fatigue properties of medium-carbon microalloyed (MA) steel 37MnSiVS, for the development of new crackable MA forging steel with excellent fatigue properties. The results show that the amount of V(C,N) precipitates increases with increasing V content and most of the precipitates are less than 5 nm. Owing to the significant precipitation strengthening effect of these nanosized particles, the hardness increase of ferrite with increasing V content is higher than that of pearlite and accordingly a decrease of pearlite/ferrite hardness ratio. 
Therefore, both the fatigue strength and the fatigue strength ratio increase with increasing V content, and excellent fatigue properties can be obtained when the V content is higher than about 0.28%. The fatigue crack growth (FCG) behavior is similar for all three 37MnSiVS samples, with an exponent m
≈ 3.5. It is concluded that V can improve the fatigue properties of ferrite–pearlite steel mainly through precipitation strengthening and therefore it is anticipated that MA steel’s fatigue property could be further improved as well as more fine V(C,N) particles be obtained.In today’s automotive industry products must meet increasingly higher performance requirements while the production cost must be increasing lower. The connecting rod is one of the engine’s core components and its quality and processing technology has been gaining great intention. The fracture splitting technology of connecting rod is an innovative processing technique that was developed in the 1990s In general, the material of connecting rod is a major factor that influences the fracture splitting process. The material not only affects connecting rod’s mechanical properties such as rigidity, hardness, tensile and fatigue strength, but also directly influences fracture splitting ability and cleavage surface quality. The material suitable for fracture splitting connecting rod should have the following properties: (1) little deformation in fracture splitting; (2) good intensity; (3) proper brittleness; (4) good machinability It is well known that fatigue strength is the most significant factor (i.e. design driving factor) in the design and optimization of the connecting rod One effective method to gain lower ductility is to strengthen the soft phase of ferrite through solid solution strengthening and precipitation strengthening, such as increasing the content of Si, P and V elements The steels with different content of V, which were designated as V1, V2 and V3, were prepared in a 200-kg vacuum-induction furnace, and the ingots were heated to 1200–1220 °C for at least 1 h and then forged to rods with diameter of 18 mm and plates of 25 mm thickness and 70 mm width. The finish forging temperature was about 850–900 °C and then still air cooled. Commercial grade of conventional medium-carbon MA steel 38MnVS was also used in the form of 90 mm diameter round bar for comparison. The 90 mm round bar was reheated and forged to rods and plates as those of the above mentioned. The chemical compositions of the tested steels are given in ), which were used to evaluate the fatigue strength in the rotating bar two-point bending fatigue tests, were prepared from the rod. Standard compact tension (C(T)) specimens for the fatigue crack growth (FCG) tests were machined from the plate in the L–T orientation and with a geometry shown in The surfaces of all the fatigue specimens were polished in the axial direction using No. 1000 abrasive papers after final finishing. Fatigue tests were conducted up to 107
cycles at different stress amplitudes at a stress ratio of R = −1, using a PQ1-6 type rotating bar two-point bending fatigue testing machine. The rate of stress cycling was 5000 rpm, and the tests were carried out in the ambient laboratory atmosphere. The fatigue strength was determined by the staircase method with at least six pairs of specimens in order to raise the confidence level. The FCG tests were carried out on an MTS-880 universal testing machine at a frequency of 20 Hz. Before the test, a 2–3 mm long precrack was prepared at the root of the notch. The FCG rate da/dN was determined under R
= 0.1. First, the relationship of the fatigue crack length a to the cycle numbers Nf was measured, then the data were treated to obtain the da/dN–ΔK curve.Optical microscope (OLYMPUS GX51) and scanning electron microscope (SEM, S-4300) were used for microstructural characterization. The specimens were etched with 3% nital solution, and the volume fraction of ferrite and pearlite interlamellar spacing were measured by using SISC IAS V8.0 software. Vickers hardness of the specimens were measured with a 5 kg load and Vickers microhardness of both the ferrite portion and the pearlite portion were also measured separately with a 5 g load, and the results were the average of at least 10 measurements. After fatigue tests, fracture surfaces were examined on an S-4300 type SEM.The specimens for transmission electron microscope (TEM) were sliced into 0.5 mm thick plate and subsequently ground down to a thickness of about 50 μm. These foils were finally electropolished in a twin-jet electropolishing apparatus using a standard chromium trioxide-acetic acid solution. Thin foils were examined in an H-800 type TEM at an operating voltage of 200 kV to study precipitates. Physical–chemical phase analysis for precipitates was used to determining the amount of precipitation phase as well as the distribution of microalloying elements between the solid solution and the precipitation. A specimen of round bar with diameter of 6 mm and length of 100 mm was first undergone electrolytically extracted and then the extracted particles were identified using Philip APD-10 X-ray diffraction instrument. The particle size distribution of microalloying carbonitrides was determined using the small angle X-ray scattering method that can detect the particles within the range of 1–300 nm. More details about the experimental process see Ref. show the optical and TEM micrographs of the tested steels as-forged and summarizes the microstructural parameters and hardness of the steels. It is clear that the microstructures consist of polygonal pro-eutectoid ferrite and pearlite. With the increase of V content, the volume fraction of ferrite increases and the microstructure becomes finer and more uniform. These effects are generally associated with the influence of V on the formation of fine vanadium carbonitride V(C,N) particles (precipitation pining effect) and the suppression of grain boundary migration by being dissolved in the austenite (solute-drag effect) , the increase of V content also reduces the pearlite interlamellar spacing. This result can be related to the influence of V on transformation temperature. As the solution of V in austenite could enhance the stability of austenite during cooling, and thus lowers the transformation temperature of austenite to pearlite. In general, the slow diffusivity at low temperature reduces the diffusion distance and consequently reduces the pearlite interlamellar spacing, and it was confirmed that interlamellar spacing is dependent on transformation temperature Also, according to the TEM observations (), two different precipitations such as random precipitation and interphase precipitation were identified, whereas most of the precipitations were random precipitated within pre-eutectoid ferrites and pearlitic ferrites (a, b, and d). Selected area diffraction pattern and energy dispersive spectroscopy (EDS) analyses of the precipitates revealed that the precipitate was V(C,N). 
The amount of V(C,N) particles increases with increasing V content. As shown in , although the volume fraction of ferrite increases with increasing V content, both the hardness and the strength increase with increasing V content, mainly due to the precipitation strengthening effect of the fine V(C,N) particles. shows the S–N curves for the tested steels. The data in each S–N curve could be represented by two straight lines. The horizontal line represents the fatigue strength at 10⁷ cycles,
and the resulting values are summarized in . The data for the conventional structural steel 40Cr in the quenched and high-temperature tempered condition (40Cr-QT) and for the crackable high-carbon forging steel C70S6 are also given in the table. As shown in , both the fatigue strength and the fatigue strength ratio increase with increasing V content for the four MA steels, and both are much higher than those of the 40Cr-QT and C70S6 steels. Also, in general, the S–N curve tends to shift to longer life and higher stress with increasing V content.

After the fatigue tests, the fracture surface of every failed specimen was carefully examined using SEM in order to investigate the fracture initiation site. For the tested MA steels, all the fractures originated from the specimen surface except one, which was initiated from a subsurface inclusion. However, for the 40Cr-QT steel, half of the fracture origins were subsurface inclusions and the other half were the surface matrix. shows examples of typical SEM micrographs of the V3 steel. As shown in the figures, crack initiation occurred at the surface of the specimen, and the fatigue crack propagated predominantly through a quasi-cleavage fracture mechanism. This implies that fatigue crack initiation is not controlled by inclusions for the four MA steels and that the microstructure plays the controlling role. shows the curves of fatigue crack growth rate versus the applied stress intensity factor range (da/dN–ΔK curves) for the tested steels at a stress ratio of R = 0.1.
It can be seen that at lower values of ΔK the difference in da/dN is not significant, whereas at higher values of ΔK there is a slight increase in da/dN with increasing V content. The curves shown in are mainly in Stage II of fatigue crack growth and can be expressed by the Paris equation

da/dN = c (ΔK)^m

where c and m are empirical constants that depend upon the material. Through a least-squares fitting procedure of the data in , the Paris equations for the tested steels were obtained as follows:

Steel V1: da/dN = 1.01 × 10⁻¹⁴ (ΔK)^3.47, R² = 0.9862
Steel V2: da/dN = 5.40 × 10⁻¹⁵ (ΔK)^3.59, R² = 0.9894
Steel V3: da/dN = 1.57 × 10⁻¹⁵ (ΔK)^3.81, R² = 0.9772
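The Paris-law constants quoted above follow from a least-squares fit of log(da/dN) against log(ΔK). A minimal sketch of such a fit is given below; the array names and units are assumptions, and the reported R² is that of the log-log regression.

```python
import numpy as np

def fit_paris_law(delta_K, dadN):
    """Least-squares fit of da/dN = c * (delta_K)**m in log-log space.
    delta_K in MPa*sqrt(m), da/dN in m/cycle (units affect c only, not m)."""
    x = np.log10(np.asarray(delta_K, dtype=float))
    y = np.log10(np.asarray(dadN, dtype=float))
    m, log_c = np.polyfit(x, y, 1)
    y_fit = m * x + log_c
    r2 = 1.0 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)
    return 10.0 ** log_c, m, r2
```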
The FCG behavior is similar for all three tested steels, with an exponent m ≈ 3.5. This is consistent with results available in the literature, where the FCG in the Paris regime has also been shown to be largely independent of the microstructure and the mechanical properties.

For conventional ferrite–pearlite steel, the hardness of ferrite is much lower than that of pearlite, and thus deformation under cyclic loading is mainly confined to the ferrite. Therefore, fatigue cracks prefer to initiate along ferrite/pearlite boundaries. Careful examination of the surfaces of specimens subjected to various numbers of cycles prior to failure confirmed this (), whereas for the 40Cr-QT steel of the same strength level, which does not possess a soft ferrite phase, the fatigue cracks easily initiated at coarse subsurface inclusions. shows the variations of strength and fatigue strength with the microstructural parameters of the tested steels. As can be seen, the fatigue strength is significantly affected by the ferrite hardness, and there exists a linear relationship between them (a). This suggests that ferrite strengthening is the most important factor in the improvement of the fatigue properties. For ferrite–pearlite steel, as for dual-phase steel, the relationship between strength and microstructure can be expressed by the following equation (). Therefore, the fatigue strength ratio σ−1/Rm may be proportional to HF/(HFVF + HPVP) (). Finally, the following relation is obtained:

σ−1/Rm ∝ HF/(HFVF + HPVP) ∝ 1/[VF + (1 − VF)·HP/HF]

The above relations indicate that the fatigue strength ratio can be increased by the following methods: (a) ferrite strengthening; (b) decreasing the pearlite/ferrite hardness ratio; and (c) increasing the ferrite volume fraction. This is verified by the results of the tested steels in .

Previous studies have revealed that V has the most significant effect on improving the fatigue properties of ferrite–pearlite steels. shows the variations of hardness and fatigue strength ratio with V content for the tested steels and for similar steels from the literature. Obviously, the amount of fine V(C,N) particles, and thus the precipitation strengthening, increases with increasing V content (). The hardness increase of ferrite with increasing V content is higher than that of pearlite, and accordingly the pearlite/ferrite hardness ratio HP/HF decreases. Therefore, according to Eq. , the fatigue strength ratio increases with increasing V content. However, this increasing tendency tends to slow down at higher V contents.

Further physical–chemical phase analysis of the size distributions of the V(C,N) particles shows that more than 95% of the particles are smaller than 5 nm, whereas less than 1% of the particles are larger than 10 nm (see ). It has been confirmed that the critical sizes for a V(C,N) particle to remain coherent and semi-coherent with the ferrite matrix are 5.2 nm and 14.5 nm, respectively. Physical–chemical phase analysis of the tested steels shows that only 48–64% of the total V is in the V(C,N) precipitates, while the rest remains in solid solution (see ). A similar result was also obtained for a Nb–V microalloyed medium-carbon steel.

Generally speaking, strength has a significant effect on the fatigue properties of steels. It is well known that there is a good correlation between the rotating bending fatigue strength, σ−1, of smooth specimens and the tensile strength, Rm, for low or medium strength steels (Rm less than about 1200 MPa). In this case, fatigue cracks tend to initiate from the specimen surface, and the failure is therefore termed surface fracture. As mentioned above, although nearly all the fatigue fractures originated from the specimen surface for the tested MA steels, and their microstructures are all ferrite–pearlite, just like those of normalized carbon steels, their fatigue properties are much superior to those of the latter, and even higher than those of the QT steel (). Obviously, this is mainly related to the beneficial effects of V mentioned above. summarizes the fatigue data of low or medium strength steels from the literature.

The present investigation was made to study the effect of an even higher addition of V, up to 0.45%, on the fatigue properties of medium-carbon MA steel, for the development of a new crackable MA steel for fabricating fracture splitting connecting rods with excellent fatigue properties. 37MnSiVS steel with three levels of V (0.15%, 0.28% and 0.45%) and, for comparison, the conventional medium-carbon MA steel 38MnVS were used in the as-forged condition. The main conclusions are:

With the increase of V content, the volume fraction of ferrite increases, the pearlite interlamellar spacing decreases and the microstructure becomes finer and more uniform.

The amount of V(C,N) particles increases with increasing V content. Physical–chemical phase analysis shows that only 48–64% of the total V is in the V(C,N) precipitates, while the rest remains in solid solution.
The size distribution analysis of the V(C,N) particles shows that more than 95% of the particles are smaller than 5 nm, whereas less than 1% of the particles are larger than 10 nm.

Both the hardness and the strength increase with increasing V content, mainly due to the precipitation strengthening effect of the fine V(C,N) particles. The hardness increase of ferrite with increasing V content is higher than that of pearlite, and accordingly the pearlite/ferrite hardness ratio decreases.

Both the fatigue strength and the fatigue strength ratio increase with increasing V content for the four MA steels, and both are much higher than those of the 40Cr-QT and C70S6 steels. Also, the S–N curve tends to shift to longer life and higher stress with increasing V content. This increasing tendency slows down at higher V contents, and it is suggested that the fatigue strength of MA steel could be raised even further if more fine V(C,N) particles are obtained.

All the fatigue fractures originated from the specimen surface except one, which was initiated from a subsurface inclusion, and the fatigue cracks propagated predominantly through a quasi-cleavage fracture mechanism. This implies that fatigue crack initiation is not controlled by inclusions for the four MA steels and that the microstructure plays the controlling role.

The FCG behavior is similar for all three tested steels, with an exponent m
≈ 3.5. At lower values of ΔK, the difference of da/dN is not significant, whereas at higher values of ΔK, there is a little increase of da/dN with increasing V content.Effect of microstructure on strain localization in a 7050 aluminum alloy: Comparison of experiments and modeling for various texturesMicrostructure attributes are responsible for heterogeneous deformation and strain localization. In this study, the relation between residual strain fields and microstructure is examined and assessed by means of experiments and crystal plasticity modeling. The microstructure of rolled aluminum alloys (AA) in the 7050-T7451 condition was experimentally obtained with electron backscatter diffraction (EBSD) analysis along the rolling direction (L-T orientation), across the rolling direction (T-L orientation), and transverse to the rolling direction (T-S orientation). Each of these sections was also patterned using a novel microstamping procedure, to allow for strain mapping by digital image correlation (DIC). The measured microstructures were in turn used as input of an elasto-viscoplastic crystal plasticity formulation based on fast Fourier transforms (EVP-FFT). Comparisons between the strain maps obtained experimentally by the concurrent DIC-EBSD method and the EVP-FFT simulations were made for the three sections, corresponding to the initial textures. The comparisons showed that the predicted levels of strain concentration were reasonable for all three specimens from a statistical perspective, which is important to properly describe and predict the strains within an ensemble of components; however the spatial match with the actual strain fields needs improvement.Aluminum alloys play an important role in the modern transportation industry, due to their combination of weight, strength, ease of manufacture, and environmental resistance. In worldwide aviation, aluminum alloys are present in more than two thirds of the plane's dry weight, still being the preferred material for an aircraft's primary structures. It means that the majority of the load carrying components and fatigue critical locations are made of this material, which are frequently stressed in multiple directions and under complex loading conditions. Component failure is a result of deformation accumulating in small regions within a part. In fact, strain localization is a precursor to material failure. Understanding the strain localization and the role of microstructure, e.g. grain's orientation and boundaries, on the strain energy accumulation is a key factor for improving the application of such materials under aggressive loading and environments. In this paper, the investigation of strain localization is enabled through combined experimental analysis and material's modeling, the outcomes of which are explored and compared.In polycrystalline materials, the microstructure attributes are responsible for heterogeneous deformation. The presence of grains and grain boundaries tends to localize deformation. Digital image correlation (DIC) has become a valuable technique to study local strain in materials and components through non-contact/non-destructive analysis. Additionally, electron backscatter diffraction (EBSD) is the predominant technique to identify spatial maps of local grain orientations. In recent years, microstructural information has been coupled with local strain maps by means of concurrent DIC-EBSD. For example, Tschopp et al. 
performed in-situ strain mapping in a scanning electron microscope (SEM) of a Ni-base superalloy, Rene 88DT Elasto Viscoplastic crystal Plasticity (EVP) links the applied macroscopic load and micro-mechanical response, accounting for slip activation. Experimental strain maps have been recently compared with crystal plasticity simulations. Using an oligocrystal aluminum sample, Zhao et al. suggested that grain topology and micro-texture have significant influence on the origin of strain heterogeneity A plate of 7050 aluminum alloy (AA) received in the T7451 condition with nominal composition . All the specimens were 1.6 mm thick and machined 6.4 mm away from the plate surfaces to avoid the excessive effect of the rolling process. The specimen geometry was adapted from the ASTM E8 All tension experiments were conducted at room temperature (23 °C), following the basic procedures described in . For placing the markings on the specimen, the automated LECO Microhardness Tester LM247AT was used. The two center indents were obtained with 1 N of indentation force, and the four smaller indents in each side defining the areas of interest were obtained with 0.5 N. Each area of interest was defined by fiducial markings in a rectangle of 800 μm by 600 μm.An FEI Philips XL-40 SEM was used for EBSD characterization. The average grain size was 80 µm, and typically ranging from 30 to 500 µm, depending on the specimen orientation with respect to the alloy rolling direction. For DIC patterning, the specimens were stamped using a novel micro stamp, manufactured by 1900 Engineering LLC, designed for the DIC reference patterning The following protocol briefly described by Cannon et al. ; this procedure was adopted to stamp all the AA 7050-T7451 specimens. If the pattern is not good enough over the area of interest, the stamping process has to be repeated beginning by thoroughly cleaning the specimen and stamp by repeating Step 1. The large dark spots regularly spaced, shown in , are the fiducial markings, as depicted in . The smaller dark spots are precipitates that can be seen through the Shipley photo resist stamp.Specimens with three different orientations from the rolling direction were loaded up to rupture to determine mechanical properties and to obtain the stress-strain curves for the crystal plasticity model parameter calibration. All tension tests were performed in a 6.7 kN electromechanical Mark-10 ESM-1500 Force Test Stand. The force indicator has a ±0.1% of full scale accuracy with a resolution of 5 N. The cross head displacement has a travel resolution of 0.02 mm, and the tests were conducted at 2 mm/min. A dedicated Epsilon extensometer Model 3542 was used to measure strain. Six specimens were tested under tension, two for each direction (L-T, T-L, T-S as indicated in ), with the stress-strain results shown in . As can be seen, the mechanical properties obtained for AA 7050-T7451 specimens are very similar in the three tested directions. The average yield stress for the L-T specimen is slightly greater than the ones for the T-L and T-S specimens. The other difference seen in these experiments that may be considered beyond the typical scatter is the final elongation of the L-T specimens that is smaller than the other two tested directions. This can be explained by the fact that the material was previously plastically deformed in the longitudinal direction during the rolling process to comply with the T7451 condition, overcoming some of its total possible elongation. 
On the other hand, this process increases the yield stress, and it is well known to also increase the fracture toughness in this direction. The mechanical properties of the three textured samples are summarized in .

An elasto-viscoplastic EVP-FFT formulation was used to model the behavior of FCC polycrystals under uniaxial loading . The local plastic strain rate is given by

ε̇^pl(x, σ) = γ̇₀ Σ_{α=1}^{N} M^α(x) ( |M^α(x) : σ(x)| / τ₀^α(x) )^n sgn( M^α(x) : σ(x) )

where γ̇₀ is the reference shear rate, τ₀^α(x) is the critical resolved shear stress (CRSS), which is incrementally updated due to strain hardening, n is the stress exponent, M^α is the Schmid tensor, and N is the total number of active slip systems, each slip system being denoted by the index α.

Considering elasto-viscoplastic behavior and using an Euler implicit time discretization scheme together with Hooke's law, the stress at material point x and time t + Δt (at which, unless otherwise noted, all fields are evaluated) becomes

σ(x) = C(x) : ε^el(x) = C(x) : [ ε(x) − ε^{pl,t}(x) − ε̇^pl(x, σ) Δt ]

with the supraindex t indicating field values evaluated at time t. Here σ(x) is the Cauchy stress tensor, C(x) is the elastic stiffness tensor; ε(x), ε^el(x) and ε^pl(x) are the total, elastic and plastic strain tensors, and ε̇^pl(x) is the plastic strain-rate tensor given by the flow rule above. Adding and subtracting from the stress tensor an appropriate expression involving the stiffness of the reference linear medium C⁰_ijkl gives

σ_ij(x) = σ_ij(x) + C⁰_ijkl u_k,l(x) − C⁰_ijkl u_k,l(x)

where u_k,l(x) is the displacement gradient tensor. This can be rearranged to give the stress tensor as

σ_ij(x) = C⁰_ijkl u_k,l(x) + φ_ij(x)

with ε_kl(x) = ( u_k,l(x) + u_l,k(x) ) / 2, and where the polarization field φ_ij(x) is given by

φ_ij(x) = σ_ij(x) − C⁰_ijkl ε_kl(x).

Enforcing stress equilibrium, σ_ij,j(x) = 0, then yields a partial differential equation for the displacement field, C⁰_ijkl u_k,lj(x) + φ_ij,j(x) = 0. Solving this partial differential equation (PDE) in a periodic unit cell under an applied strain E = 〈ε(x)〉, the Green's function method's auxiliary PDE reads

C⁰_ijkl G_km,lj(x − x′) + δ_im δ(x − x′) = 0

where G_km(x − x′) is the Green's function associated with the displacement field u_k(x). The displacement gradient can then be obtained as a convolution in real space; integrating by parts and assuming that the boundary terms vanish,

u_k,l(x) = ∫ G_ki,jl(x − x′) φ_ij(x′) dx′

which can be evaluated in Fourier space as a product instead of an integral, thus shortening the computational time required for the simulation, i.e.

û_k,l(ξ) = Ĝ_ki,jl(ξ) φ̂_ij(ξ)

where the Green operator is defined as Γ_ijkl = G_ik,jl. The strain field then follows from the symmetrized displacement gradient as

ε̂_ij(ξ) = sym[ Γ̂_ijkl(ξ) φ̂_kl(ξ) ] for ξ ≠ 0, with ε̂(0) = E.

The symbol “^” indicates a Fourier transform and ξ is a frequency of Fourier space. The Green operator in Fourier space, which is only a function of the reference stiffness tensor C⁰_ijkl and the frequency ξ, is given in []; a nonlinear system of equations is then solved at every point to obtain a new guess for the stress field. With these new values of the micromechanical fields, the polarization is iteratively updated (see ), and an augmented Lagrangian scheme is used to obtain, at the end of this iterative process, a compatible strain field and an equilibrated stress field.

The EVP-FFT formulation allows the implementation of different microscopic hardening laws without the need to change the algorithm. The constitutive relationship used in this particular model was the generalized Voce hardening law, in which the CRSS of each slip system evolves with the accumulated plastic shear Γ(x) = Γ^t(x) + Σ_{α=1}^{N} |γ̇^α(x)| Δt at material point x; τ₀ and θ₀ are the initial yield stress and initial hardening rate, respectively, and τ₁ and θ₁ are the parameters that describe the asymptotic behavior of the material.
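The two local constitutive ingredients of the model, the rate-sensitive flow rule quoted above and the Voce-type evolution of the CRSS, can be sketched in a few lines of code. The explicit Voce expression used below, τ̂ = τ₀ + (τ₁ + θ₁Γ)(1 − exp(−Γ|θ₀/τ₁|)), is the form commonly used with VPSC/EVP-FFT codes and is an assumption here, since the paper only names the parameters; the Schmid tensors and parameter values are placeholders.

```python
import numpy as np

def plastic_strain_rate(sigma, schmid_tensors, crss, gamma_dot0=1.0, n=10):
    """Rate-sensitive flow rule quoted above: sum over slip systems of
    gamma_dot0 * |m:sigma / tau|^n * sgn(m:sigma) * m, with m the (symmetrized)
    Schmid tensor of each system."""
    eps_dot = np.zeros((3, 3))
    for m_alpha, tau_alpha in zip(schmid_tensors, crss):
        rss = np.tensordot(m_alpha, sigma)   # resolved shear stress m : sigma
        eps_dot += gamma_dot0 * abs(rss / tau_alpha) ** n * np.sign(rss) * m_alpha
    return eps_dot

def voce_crss(gamma_acc, tau0, tau1, theta0, theta1):
    """Assumed generalized Voce form for the CRSS as a function of the
    accumulated shear gamma_acc; tau1 must be non-zero in this simplified form."""
    return tau0 + (tau1 + theta1 * gamma_acc) * (1.0 - np.exp(-gamma_acc * abs(theta0 / tau1)))
```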
This hardening law is isotropic; i.e., each slip system hardens at the same rate.An initial guess of the parameters was obtained from the macroscopic stress-strain curves, as there is a direct relationship between the microscopic and macroscopic curves through the average Taylor factor of the polycrystal, assuming this factor as constant during deformation ), which corresponds to the T-L grain orientations. The final values were then obtained through manual fitting so that one set of parameters would account for all stress-strain curves. The microstructure was reconstructed from the EBSD scans by using the software Dream3D The method used for mapping the strain field was ex-situ digital image correlation (DIC), which consisted of characterizing the grain orientation within the areas of interest through EBSD and stamping each specimen according to the protocol described in . The L-T, T-L, and T-S specimens were loaded to a prescribed strain level of approximately 3%, then unloaded and removed from the tensile load frame for the analysis. The strain mapping was based on the residual strain field after plastic deformation. The DIC technique has proven to be an appropriate method for full-field strain measurements shows the inverse pole figures for the specimens L-T, T-L, and T-S, respectively. For all the images, the horizontal direction represents the load direction in the specimen coordinate system. As expected, the L-T, T-L, and T-S specimens have a predominance of elongated grains in the horizontal, vertical, and out of the plane directions, respectively. For the T-S specimen, this conclusion comes from relatively smaller grains seen in the IP figure compared with the other two directions, resulted from the grain elongation perpendicular to the plane of the image. The average grain sizes for each scan are 88, 79, and 59 µm for the specimens L-T, T-L, and T-S, respectively. The Taylor factor analysis for each of the scanned materials were ~2.44, thus displaying texture due to rolling.Due to the substantial amount of second phase particles in AA 7050-T7451, caused by its overaged condition, some small areas in the inverse pole figure are just noise, since the EBSD does not account for precipitates. This noise was removed with standard filters available through the EBSD software package. The color map of the inverse pole figure does not represent the actual aluminum crystal orientation under these agglomerates, which are more likely to be a combination of MgZn2, AI7Cu2Fe, Al2CuMg, and Mg2Si For all illustrations, the axial strain is shown in the horizontal direction. After achieving the imposed maximum strain, the specimen was unloaded, remaining with an overall plastic strain. The Correlated Solutions Software Vic-2D was used to perform the DIC in-plane measurements on the delimited area show the in-plane strain maps for the three specimens (L-T, T-L, and T-S, respectively), with the actual axial, transverse and shear strains obtained with DIC analysis.It is important to note that the localized strain showed in each picture represents the residual strain upon unloading. This should not be confused with the residual elastic strains, normally obtained by diffraction techniques measuring lattice spacing. 
In the macroscale stress-strain relationship, the overall residual plastic strain is defined as the plastic strain; at the microscale, the localized residual plastic strain is the result of local accommodation of strain upon unloading, due to the anisotropy of the elastic tensor at the crystal level and to the plastic anisotropy related to the specific orientation of the slip systems. A consequence of this crystal-level anisotropy is that the associated mechanical responses, such as crystallographic and morphologic texture, strength, strain hardening, deformation-induced surface roughening and damage, are also orientation dependent 

The final total strain measured for the L-T specimen was 3.11%, reaching 427 MPa. a shows the map of the residual plastic strain component along the loading direction with the superimposed grain boundaries determined by EBSD. The results from DIC show an average strain of 2.45%. Despite this residual average strain, with a standard deviation of 0.33%, some regions could be resolved as having a much higher strain, showing localized strain concentrations that can facilitate crack nucleation. The maximum plastic strain found was 3.20%. At this resolution and after this amount of loading, no tendency could be obtained relating grain misalignment and strain concentration at the boundaries. b shows the residual transverse strain field, εyy, for the L-T specimen. The average DIC-computed strain in this direction was found to be −0.74%, with a standard deviation of 0.21%, shown over a range from −2% to 0. The in-plane shear strain field is shown in c. Ideally, the average for εxy would be zero. The average of −0.15%, with 0.25% standard deviation, found in the experiment can be explained by bias and uncertainties in the DIC measurements, by any small eccentricity in the load due to non-uniform grip holding, and by the fact that the DIC analysis covers less than 5% of the total gauge section. For this case, εxy strain values ranged from −1.1% to 1.1%.

The T-L specimen was subjected to a total axial strain of 3.12%, reaching 489 MPa. The results from DIC showed a 2.50% average axial strain for the analyzed area. a shows the map of residual plastic strains in the loading direction with the superimposed grain boundaries. For this case, the boundaries were represented by three different colors, according to the misalignment angle. Once again, we can visualize maximum localized residual plastic strains much higher than the average plastic strain, and lower-strain regions with values less than half the computed mean strain. The strain range for this specimen is shown from 0.8% to 4.6%, with an average of 2.50% and a 0.76% standard deviation. The maximum resolved strain and the scatter are higher than the values computed for the L-T specimen. One interesting note is that there is a tendency for the axial strain to form isocurves extending perpendicular to the load direction. At the extreme left of a, there is a strip with high strain intensity. In this position, the EBSD resolved the topography as one long grain, which implies that any misalignment of the grains is lower than 5°. To the right of this position, there is another strip with the lowest strain level. Even though this low-strain band covers several identified grains along the y-direction, it can be seen that the misalignment angles between grains along the isocurves are predominantly lower than 15°, i.e., these are mostly low-angle grain boundaries. b shows the residual plastic strain field perpendicular to the loading direction, εyy, for the T-L specimen. 
The average DIC strain in this direction was found to be −0.90%, with a 0.23% standard deviation, in a range from −2% to 0. The in-plane shear strain field is shown in c. The average for εxy in this case was found to be 0.05%, with a 0.30% standard deviation.

The T-S specimen was subjected to a total axial strain of 3.10%, reaching 488 MPa. The results from DIC showed a 2.26% average residual axial strain for the analyzed area. a shows the residual plastic strain component along the loading direction with the superimposed grain boundaries. The maximum localized residual axial strain shown is 3.8%. The DIC average was 2.26% with a 0.40% standard deviation. b shows the residual transverse strain field, εyy, for the T-S specimen. The average DIC-computed strain in this direction was found to be −1.16%, with a 0.32% standard deviation, shown on a saturated scale from −3.0% to 0. The in-plane shear strain field is shown in c. The average for εxy in this case was found to be −0.10%, with a 0.24% standard deviation. These are the largest variations for the transverse strain among all specimens. For this specific case, the resolution of the resolved strain was compromised by the large base pattern size, which was 10 µm. For the T-S specimen in particular, it is necessary to work with a finer speckle size. Once again, it is important to emphasize that the most significant result of the present work is a consistent and repeatable process to speckle a specimen for DIC. Even though the average strain at the grain level is accurately determined here, strain localization at the sub-grain level is somewhat compromised by the large size of the current pattern. However, at this resolution and speckle size, we can still resolve strain concentrations caused by the precipitates on the order of a few microns. The spots with localized high strain level shown in the figures are coincident with large precipitates present in the material.

The conversion of the orientation data files from the EBSD to an appropriate input file for the simulations required several steps. The hexagonal gridded data inside each orientation file was converted into a square grid format in DREAM3D. The mechanical response of the precipitates is implicitly captured in the data used to fit the Voce hardening law (). Thus, we average the hardening response of the precipitates over the slip systems, but do not explicitly include the precipitates within the microstructure of the crystal plasticity simulations. Future work will investigate precipitate morphology and orientation but is beyond the scope of the current paper. Anisotropic elastic constants for AA 7050 were obtained from Pereira et al. shows the macroscopic stress-strain relationship for the three specimen orientations, used to obtain the generalized hardening law parameters. The final Voce hardening parameters from microscopic fitting are shown in The FFT-based formulation allows the imposition of a macroscopic strain rate, a macroscopic stress, or a combination of both as long as they are complementary. For this model, only the macroscopic strain rate, Ė, along the x-direction was imposed, while the other strain components were adjusted to fulfill stress-free conditions for the corresponding stress components. The microstructure profile was reconstructed from the three EBSD scans (L-T, T-L, T-S) and was modeled as a through-thickness columnar grain structure with a single-voxel thickness, so that the FFT periodicity effectively treats the grains in the EBSD map as extruded to infinity. 
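The hexagonal-to-square regridding step that produces the simulation input can be sketched as below. This is an illustrative assumption of how such a conversion might be done (nearest-neighbour resampling of flat coordinate and Euler-angle arrays); the function name and the choice of interpolator are our own, not the exact procedure or software options used in this work.

```python
import numpy as np
from scipy.interpolate import griddata

def hex_to_square(x_hex, y_hex, euler_hex, step):
    """Resample hexagonal-grid EBSD points onto a square grid of spacing `step`.

    x_hex, y_hex : (n,) point coordinates of the hexagonal scan
    euler_hex    : (n, 3) Euler angles at each point
    """
    xi = np.arange(x_hex.min(), x_hex.max() + step, step)
    yi = np.arange(y_hex.min(), y_hex.max() + step, step)
    XI, YI = np.meshgrid(xi, yi)
    # nearest-neighbour interpolation keeps orientations as discrete labels
    # (no averaging of Euler angles across grain boundaries)
    euler_sq = np.stack(
        [griddata((x_hex, y_hex), euler_hex[:, k], (XI, YI), method='nearest')
         for k in range(3)], axis=-1)
    return euler_sq
```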
Thus, three different unit cells are created as input to the simulations, assuming columnar grains in the third direction. We note that this assumption is most valid in the case of the T-S sample, as the pancake-shaped grains are elongated in the L direction due to the rolling process. But in general, given that each voxel is ~3 µm deep and that the actual thickness of the specimen is larger by several orders of magnitude, an infinitely columnar structure is a reasonable assumption even when modeling the L-T and T-L samples. The minimum representative size for each specimen was found to be (i) T-L=232×190×1 voxels=1000×700×3.7 µm; (ii) L-T=361×290×1 voxels=938×754×2.6 µm, and (iii) T-S=372×297×1 voxels=967×772×2.6 µm. Given that the simulation requires a periodic microstructure, a gas phase was added on the free boundaries and extra material was added on the constrained ones to achieve periodicity. Neither of these additional regions is included in the strain results shown in this section. With the gas phase and extra material added, the minimum size needed for a representative T-L simulation was found to be 256×256×1 voxels, while for L-T and T-S the size had to be at least 512×512×1 voxels. To ensure the quality of the results, a final size of 1024×1024×1 voxels was used for all models. The increase in size had no impact on the macroscopic behavior; however, it added resolution to the results for each sample's microstructural behavior.

 shows the simulated strain field maps side-by-side with the experimental measurements. As can be observed, the simulations do not capture the exact heterogeneous strain distributions at the microstructural features as represented in the DIC-EBSD experiments, especially for the L-T and T-S specimens. However, the simulations correctly predict the statistical strain distributions for each crystallographic texture. shows the histograms and cumulative distribution functions obtained for each specimen, comparing simulation with experimental data. It can be seen in each case that at a nominal residual plastic strain of approximately 2.4%, multiple locations exhibit strains greater than 5% (roughly double the applied macroscopic strain). This is an important ramification for design and structural analysis: even macroscopic loadings that are firmly within the elastic region of the material's behavior may create local plastic strains, thus resulting in hot spots that are prone to failure, especially in fatigue. The crystal plasticity simulation correctly predicted the complete strain distributions, which are important for predicting failure. Within an engineering context, the accurate prediction of statistical distributions of local strains is important for materials and structural analysis of an ensemble of components. In general, the model results show that the strain concentration levels are in statistical agreement with the experiments. However, the strain distribution at the individual microstructural features from the simulations did not match the DIC results. 
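Since the model-experiment comparison just described is made through strain statistics rather than point-by-point maps, a simple way to extract those statistics from a strain field is sketched below; masking padded regions with NaN and the 5% threshold in the usage comment (roughly double the ~2.4% nominal residual strain) are illustrative conventions, not the exact post-processing used here.

```python
import numpy as np

def strain_statistics(strain_map, bins=50):
    """Histogram (density) and empirical CDF of a strain field; padded or masked
    voxels are expected to be NaN and are excluded."""
    eps = strain_map[np.isfinite(strain_map)].ravel()
    hist, edges = np.histogram(eps, bins=bins, density=True)
    eps_sorted = np.sort(eps)
    cdf = np.arange(1, eps_sorted.size + 1) / eps_sorted.size
    return hist, edges, eps_sorted, cdf

# e.g. fraction of the field straining to more than roughly double the nominal value:
# hot_fraction = np.mean(strain_map[np.isfinite(strain_map)] > 0.05)
```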
The possible sources for these differences are explained as follows: (i) the different boundary conditions between the experiment and the model, (ii) the role of the subsurface microstructure on the surface strain response, (iii) incomplete physics in the constitutive equations used in the material models, and (iv) the resolution of the 10 μm pattern DIC analysis.

Regarding the boundary conditions, the simulation corresponds to a uniform average strain rate throughout the entire material, whereas experimentally this strain rate is only applied at the grips of the specimens. In the gauge section of the specimen, where the strain maps are characterized, the neighboring grains govern the exact boundary conditions. In other words, unlike in the simulation, the deformation is not truly uniform in the actual specimen. With respect to the subsurface microstructure, the computational model maps the surface grains and applies periodic boundary conditions (this is commonly called a 2.5-dimensional model). Thus, the model assumes grains to be columnar, which means that the underlying layers are formed by exactly the same structure shown on the surface. As demonstrated by Turner et al. 

The constitutive equations used within the crystal plasticity model may be incomplete. For instance, the exact form of the flow rule ignores the role of the normal stress on the slip system or of hydrostatic stresses. It should be noted that the crystal plasticity formulation used in this study does not include a characteristic length scale, and thus we cannot account for grain sizes within the microstructure. Given the pancake-shaped grains in this rolled microstructure, the aspect ratio of the grains may have size effects on the resulting strain maps. Strain-gradient approaches would be necessary to account for these size effects. The EVP-FFT formulation does take into account grain morphology. Therefore, as noted earlier, we would expect the T-S model to be most accurate, since the pancake-shaped grains are elongated normal to the T-S direction, and thus the assumption of columnar grains is easily justified for this orientation. Additionally, the generalized Voce law assumes that all slip systems harden by an equal amount, which ignores dislocation pile-ups on individual slip systems. It is also possible that, by using this isotropic Voce hardening law, previous directional hardening caused by the rolling of the material is not being taken into account when modeling strain in the T-L, L-T, and T-S directions. Thus, the residual strain distributions within the grains, prior to loading, should be orientation dependent due to the rolling process. Also, the Voce law reduces the number of variables involved in the calculation, which in turn lowers the level of parametric uncertainty of the model and therefore benefits the computational time required to fit and run the simulations. For the multiple hardening laws commonly employed, the resulting slip system activity can differ quite drastically while satisfying the same macroscopic response 

The novel micro-stamping used in the present work has been shown to be very effective for digital image correlation. The 10 μm base pattern element was sufficient to provide grain-level resolution for the AA 7050-T7451, at 10× optical resolution. For sub-grain resolution, it is recommended that a finer pattern be used and imaged at higher magnification. The process is fully controlled and repeatable, which justifies efforts in the development and production of finer stamps, on the order of a micron or smaller. 
As can be seen from the strain field maps, the strain varies according to grain orientation and is clearly affected by the neighboring grains. There is a tendency for the axial strain to form isocurves extending perpendicular to the load direction. This is more noticeable for the T-L specimen, where the grains are elongated in that direction, favoring this behavior. The experiments have shown that the maximum residual plastic strain at the grain level was about twice the average residual plastic strain on the specimen. An EVP-FFT simulation was used to model uniaxial loading with respect to three different orientations of rolled AA 7050-T7451. The statistical nature of the strain fields was reasonably well predicted by the EVP-FFT simulation, as the maximum microstructural strains were roughly double the macroscopic residual plastic strains. This result has profound implications for the design of components that may experience failure (especially from cyclic or time-dependent loadings) at stresses well below the macroscopic yield point of the material. It is noted that crystal plasticity could not accurately predict the heterogeneous strain fields at each microstructural feature. A better match between the model and the real material has to be pursued to improve the prediction of spatial strain maps across the microstructure, and several reasons for discrepancies between the model and the experiment were discussed. In many engineering applications, it is more important to capture the statistical nature of heterogeneous strain than to match the strain distribution at each microstructural feature, as the strain statistics can be used to predict the life of an ensemble of components.

Transformed non-linear integral equation

A new method for the non-linear deflection analysis of an infinite beam resting on a non-linear elastic foundation

The aim of this paper is to develop a new method of analyzing the non-linear deflection behavior of an infinite beam on a non-linear elastic foundation. Non-linear beam problems have traditionally been dealt with by semi-analytical approaches that involve small perturbations or by numerical methods, such as the non-linear finite element method. In this paper, in contrast, a transformed non-linear integral equation that governs non-linear beam deflection behavior is formulated to develop a new method for non-linear solutions. The proposed method requires an iteration to solve non-linear problems, but is fairly simple and straightforward to apply. It also converges quickly, whereas traditional non-linear solution procedures are generally quite complex in application. Mathematical analysis of the proposed method is performed. In addition, illustrative examples are presented to demonstrate the validity of the method developed in the present study.

The problem of an infinite beam on an elastic foundation is a very important topic in the field of engineering, including the strength analysis and engineering design of runways, highways, railways and the like. A number of researchers, going back many years, have made important contributions in this area Real systems, however, always include non-linearities, which render this problem complicated and difficult to analyze. In the present study, we consider the problem of the static deflection of an infinite beam on a non-linear elastic foundation. 
When the non-linear effects of the elastic foundation are taken into account, numerical or semi-analytical solution procedures (for example, finite element analysis (FEA) or the perturbation method) are usually employed. Kuo and Lee However, there is a further point that needs clarification with regard to a weakness of the aforementioned methods. The semi-analytical solution procedures of the perturbation approach are strongly dependent on a small parameter: that is, they function only when the beam deflection is small. In the case of numerical solution procedures (for example, FEA), considerable labor is required, not only for the discretization (including mesh generation) but also for the numerical calculations. To overcome such difficulties, in this paper, we propose a novel numerical procedure (or method) for analyzing the non-linear deflection of an infinite beam on a non-linear elastic foundation under localized external loads. A similar iterative technique, based on the Banach contraction mapping theorem, was previously proposed and successfully applied to obtain the non-linear wave profiles of Stokes waves . In , a governing differential equation is presented for an infinite beam on a non-linear elastic foundation, together with its corresponding integral equation and the proposed iterative procedure. then presents an analysis of convergence and the uniqueness of the solution. Finally, numerical experiments that demonstrate the feasibility of the proposed procedure are presented in , followed by discussion and conclusions.

We consider an infinitely long beam resting on a non-linear elastic foundation, as shown in . From the classical Euler beam theory, the vertical deflection u(x) that results from the load distribution p(x) satisfies the fourth-order ordinary differential equation , where EI is the flexural rigidity of the beam, and E and I denote the Young's modulus and the second moment of area of the cross-section, respectively. As p(x) is the net loading, consisting of the applied loading w(x) downward and the non-linear spring force f[u(x)] upward, we have . (For simplicity, we neglect the weight of the beam.) Substitution of Eq. into Eq. then follows. Assume that the non-linear restoring force f(u) is odd and analytic, such that we can write the Taylor series of f(u) around the equilibrium u=0 , where df(0)/du is defined as k, and f(0) is set to zero because f is an odd function. Therefore, Eq. becomes a non-linear ordinary differential equation for the non-linear deflection u.

Because it may sometimes be advantageous to employ an integral operator in non-linear analysis, we transform the non-linear differential equation, Eq. , into an equivalent integral equation. For this purpose, we review the linear deflection solution when the non-linear spring force term N(u) is negligible. , expressed in closed form in terms of the Green's function , the Green's function G is calculated by complex contour integration as follows:
$$G(x,\xi)=\frac{\alpha}{2k}\,e^{-\alpha|\xi-x|/\sqrt{2}}\,\sin\!\left(\frac{\alpha|\xi-x|}{\sqrt{2}}+\frac{\pi}{4}\right),\qquad\text{where }\alpha=\sqrt[4]{k/EI}.$$
In , it is assumed that the loading w(x) is sufficiently localized for u, du/dx, d2u/dx2 and d3u/dx3 to all tend towards zero as |x|→∞. must be equivalent to the following relation: , and we finally arrive at a non-linear relation for u:
$$u(x)=\int_{-\infty}^{\infty}G(x,\xi)\,w(\xi)\,d\xi-\int_{-\infty}^{\infty}G(x,\xi)\,N[u(\xi)]\,d\xi,$$
which is classified as a non-linear Fredholm integral equation of the second kind for u. We have thus derived a non-linear integral equation for the non-linear beam deflection. On the basis of this integral equation, we propose a new solution procedure. 
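The closed-form Green's function above translates directly into a short numerical routine for the linear part of the deflection. The sketch below is an illustrative implementation, assuming consistent units and a trapezoidal quadrature over a truncated interval on which the loading is effectively supported; the function names are our own.

```python
import numpy as np

def greens_function(x, xi, k=1.0, EI=1.0):
    """G(x, xi) = (alpha/2k) exp(-alpha|x - xi|/sqrt(2)) sin(alpha|x - xi|/sqrt(2) + pi/4),
    with alpha = (k/EI)**0.25, for the linear beam on a Winkler foundation."""
    alpha = (k / EI) ** 0.25
    r = alpha * np.abs(x - xi) / np.sqrt(2.0)
    return alpha / (2.0 * k) * np.exp(-r) * np.sin(r + np.pi / 4.0)

def linear_deflection(x_eval, xi, w, k=1.0, EI=1.0):
    """Integral of G(x, xi) w(xi) over xi, approximated by the trapezoidal rule on the
    sampled interval (the loading w is assumed negligible outside it)."""
    G = greens_function(x_eval[:, None], xi[None, :], k, EI)
    return np.trapz(G * w[None, :], xi, axis=1)
```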
We first define π(x) and λ[u] as follows:
$$\pi(x)\equiv\int_{-\infty}^{\infty}G(x,\xi)\,w(\xi)\,d\xi,\qquad \lambda[u](x)\equiv-\int_{-\infty}^{\infty}G(x,\xi)\,N[u(\xi)]\,d\xi.$$
Here π(x) represents the linear part of the beam deflection, whereas λ[u], a non-linear functional of u, corresponds to the non-linear part of the deflection, i.e., the difference between the linear and non-linear solutions of the deflection. Thus, Eq. shall be our starting point for the calculation of the non-linear beam deflection: we propose the iterative method
$$u_{n+1}(x)=\frac{1}{\beta+1}\left\{\beta u_{n}(x)+\pi(x)+\lambda[u_{n}](x)\right\},$$
where β(≠−1) is a constant, which is introduced to improve the convergence condition. We call this constant β the convergence parameter in this study. The effect of a non-zero convergence parameter is shown in analytically and numerically, respectively.

In this section, we show that the proposed method converges under certain conditions. We also explain a benefit of the introduction of β in Eq. , that is, how β affects the convergence of the iterative solution of Eq. . It should be noted that if the sequence un converges, the iteration reduces to the original integral Eq. through the cancellation of βu from both sides. First, we investigate the existence of a solution to , as well as its uniqueness. Second, we show that the solution can be constructed by successive iteration. We also examine the condition under which the successive iteration converges.

Letting X be a set of real continuous functions with a compact support, say [−R, R], for a real positive R, we define an operator ψ:X→X such that
$$\psi(u)=\frac{1}{\beta+1}\left\{\beta u+\pi+\lambda[u]\right\}\quad\text{for }u\in X.$$
With the aim of showing whether the operator ψ is a contraction, we calculate the sup-norm difference between ψ(u) and ψ(v) for u,v∈X:
$$\|\psi(u)-\psi(v)\|_{\infty}=\frac{1}{|\beta+1|}\,\big\|\beta(u-v)+\lambda[u]-\lambda[v]\big\|_{\infty}\le\frac{1}{|\beta+1|}\left\{|\beta|\,\|u-v\|_{\infty}+\|\lambda[u]-\lambda[v]\|_{\infty}\right\},$$
where the triangle inequality is used, and the notation ‖⋅‖∞ represents the sup-norm. With , the term ‖λ[u]−λ[v]‖∞ that appears in the inequality satisfies
$$\|\lambda[u]-\lambda[v]\|_{\infty}=\left\|\int_{-\infty}^{\infty}G(x,\xi)\,\{N[v(\xi)]-N[u(\xi)]\}\,d\xi\right\|_{\infty}\le\int_{-\infty}^{\infty}\|G(\cdot,\xi)\|_{\infty}\,\big|N[u(\xi)]-N[v(\xi)]\big|\,d\xi.$$
Noting that ‖G(⋅,ξ)‖∞=α/2k by the Green's function, and assuming N to be Lipschitz on X with Lipschitz constant ρ, we obtain
$$\|\lambda[u]-\lambda[v]\|_{\infty}\le\frac{\alpha}{2k}\int_{-R}^{R}\big|N[u(\xi)]-N[v(\xi)]\big|\,d\xi\le\frac{\alpha R}{k}\,\|N(u)-N(v)\|_{\infty}\le\frac{\alpha R\rho}{k}\,\|u-v\|_{\infty},$$
and therefore
$$\|\psi(u)-\psi(v)\|_{\infty}\le\frac{1}{|\beta+1|}\left\{|\beta|\,\|u-v\|_{\infty}+\frac{\alpha R\rho}{k}\,\|u-v\|_{\infty}\right\}=\frac{1}{|\beta+1|}\left(|\beta|+\frac{\alpha R\rho}{k}\right)\|u-v\|_{\infty}.$$
Thus, the operator ψ is a contraction mapping, as asserted, if the following inequality holds. Because the introduced set X, which is equipped with ‖⋅‖∞, is a complete normed space, the mapping ψ has a fixed point under the condition (1/|β+1|)(|β|+αRρ/k)<1. Therefore, u=ψ(u) has a unique solution that can be constructed by successive iteration, by the Banach theorem. Note that the ratio $\kappa\equiv\alpha\rho/k=\rho\big/\sqrt[4]{k^{3}EI}$ is an inherent quantity for a given problem, because k is the linear spring constant and ρ is associated with the non-linear elastic foundation. This reveals the desired conditions for the contraction mapping and the convergence of the iteration in Eq. : a larger linear spring constant k, a smaller R and a weaker non-linearity ρ all contribute to faster convergence of the iterative Eq.

The role of the convergence parameter β can be appreciated through error analysis and the numerical parametric study in . Here, we show that the convergence parameter β relaxes the condition of convergence and allows more cases to converge. Suppose we solve the equation x=f(x) iteratively, i.e., using the fixed-point iteration x_{n+1}=(βx_n+f(x_n))/(β+1). Denoting the error of the approximate solution at the nth step by e_n=x_n−x, we have
$$e_{n+1}=\frac{1}{\beta+1}\big(\beta x_{n}+f(x_{n})\big)-x=\frac{1}{\beta+1}\big(\beta e_{n}+f'(\xi)\,e_{n}\big)=\frac{1}{\beta+1}\big(\beta+f'(\xi)\big)\,e_{n},\qquad \xi\in[x_{n},x].\qquad(\mathrm{A})$$
Equation (A) illustrates that the error decreases in magnitude when −1<(β+f′(ξ))/(1+β)<1. This condition can be written as −2β−1<f′(ξ)<1 for β>−1. 
If β is not introduced, i.e., β=0, then the error decreases only when −1<f′(ξ)<1. Hence, a positive convergence parameter β allows the convergence of the fixed-point iteration for more cases of f(x) that do not satisfy −1<f′(ξ)<1. However, there is a price to pay: a non-zero convergence parameter β increases the magnitude of the convergence ratio, and more iterations are required. This can be appreciated by comparing the magnitude of (β+f′(ξ))/(1+β) with that of f′(ξ). These positive and negative effects of the convergence parameter β on the iteration are demonstrated in 

According to the numerical integration rule, Eq. becomes
$$u_{n+1}(x_{i})=\frac{1}{\beta+1}\left\{\beta u_{n}(x_{i})+\pi(x_{i})+\lambda[u_{n}(x_{i})]\right\},\qquad i=0,1,\ldots,N,$$
$$\lambda[u(x_{i})]=-\sum_{j=1}^{N}w_{ij}\,G(x_{i},\xi_{j})\,N[u(\xi_{j})],\qquad i=0,1,\ldots,N,$$
where wij denotes the appropriate weights for the integration rule. For the numerical integrations in Eqs. , the infinite limits are replaced by a large value R. The number N for the summation in Eqs. represents the number of segments in the interval (−R,R). Using matrix–vector notation, Eq. 

In this section, we demonstrate that the proposed iterative method has a good convergence property for many non-linear restoring forces. Also, the introduction of the constant β proves to be very useful, giving us another dimension of convergence control: by choosing a different β, we can make the iterations converge. We first assume that the non-linear restoring force f(u) in Eq. can be modeled as a polynomial, i.e., $f(u)=ku+\gamma u^{p}$ with the non-linear term $N(u)=\gamma u^{p}$, without loss of generality. The errors of the approximate solutions at the nth iteration are defined as follows:
$$\mathrm{Error}(n)\equiv\frac{\|u(x)-u_{n}(x)\|_{2}}{\|u(x)\|_{2}},\qquad\text{where}\quad\|z\|_{2}\equiv\left(\sum_{i=1}^{N}|z_{i}|^{2}\right)^{1/2}.$$
To demonstrate that the iterative method converges to an exact solution, we first assume the exact solutions and then obtain the corresponding external loads. The external loads that correspond to the assumed displacements are obtained from the governing Eq. by simple substitution of u(x). We choose three different cases, which are listed in . For example, the external loading that corresponds to the exact solution $u(x)=e^{-x^{2}}$ of case (1a) in turns out to be
$$w(x)=12e^{-x^{2}}-48x^{2}e^{-x^{2}}+16x^{4}e^{-x^{2}}+ke^{-x^{2}}+\gamma e^{-3x^{2}}.$$
The three different cases considered are listed in . The exact solutions u(x) are chosen to be infinitely differentiable and to become extremely small far away from x=0. The exact solutions in , and their corresponding loadings, are shown in . In all cases, we assume E=I=k=γ=1, and the order of the monomial is p=3 in the non-linear elastic term $f(u)=ku+\gamma u^{p}$, without loss of generality. The constant β in Eq. is set to 10 unless stated otherwise. In cases (1a–1c), the interval on the x-axis where the displacements are not negligible varies for comparison. In this study, separation between the beam and the foundation is not taken into account, since it leads to a non-linearity that changes the governing equation. The proposed method thus cannot be applied to such separation problems; a modification of the method to handle them is left for future research. The negative deflections of the beam mean that the non-linear spring foundation considered is in a state of non-linear expansion. With the information on the loading conditions shown in , the desired non-linear deflections of an infinite beam on a non-linear elastic foundation are obtained using the iterative method . 
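A compact implementation of the discretized iteration above, together with the case (1a) manufactured-solution check, is sketched below. It is an illustrative version rather than the authors' code: it reuses greens_function() from the earlier sketch, uses trapezoidal weights, a truncation radius of 10, a zero initial guess and a fixed iteration count, and the function name is our own.

```python
import numpy as np

def solve_nonlinear_deflection(xi, w, N, k=1.0, EI=1.0, beta=10.0, n_iter=100):
    """Relaxed fixed-point iteration u_{n+1} = (beta*u_n + pi + lam[u_n]) / (beta + 1),
    with pi(x) = int G w dxi and lam[u](x) = -int G N(u) dxi on the truncated grid xi."""
    G = greens_function(xi[:, None], xi[None, :], k, EI)  # Green's function matrix
    pi_x = np.trapz(G * w[None, :], xi, axis=1)           # linear part of the deflection
    u = np.zeros_like(xi)                                 # zero initial guess u0 = 0
    for _ in range(n_iter):
        lam = -np.trapz(G * N(u)[None, :], xi, axis=1)    # non-linear part
        u = (beta * u + pi_x + lam) / (beta + 1.0)        # relaxed update with parameter beta
    return u

# Manufactured-solution check for case (1a): u(x) = exp(-x^2), E = I = k = gamma = 1, p = 3.
xi = np.linspace(-10.0, 10.0, 2001)
u_exact = np.exp(-xi**2)
w = (12 - 48 * xi**2 + 16 * xi**4) * np.exp(-xi**2) + u_exact + u_exact**3  # EI u'''' + k u + gamma u^3
u_num = solve_nonlinear_deflection(xi, w, N=lambda u: u**3, beta=10.0)
error = np.linalg.norm(u_exact - u_num) / np.linalg.norm(u_exact)           # Error(n) as defined above
```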
The calculated non-linear deflections u(x) for the three different cases, obtained with the zero initial guess u0=0 and n=100 iterations, are depicted in and show good agreement with the exact solutions. illustrates the convergence behavior of the calculated solutions. As can be seen in this figure, the iterative solutions are close to the exact solutions after only the first few iterations. The errors of the approximate solutions are plotted against the iteration number in . The errors decrease monotonically in most cases, and for different β's reach the steady-state error within 50–200 iterations. We also check the convergence behavior of the approximate solutions at different convergence constants β. The effect of the constant β in Eq. on the number of iterations is shown in . The number of iterations is expected to increase as the constant β becomes larger, and this is clearly demonstrated numerically in for all cases (1a–1c). It should also be pointed out that, although the introduction of β makes the convergence slower than for β=0, the benefit of the convergence parameter β is that it can make the iterations converge, i.e., β renders the mapping in Eq. a contraction mapping, which is demonstrated in the following section.

We test the proposed iterative procedure with different external loadings and non-linear elasticity. More specifically, we change the amplitude of the external loading and the polynomial order of the non-linear spring force. As the analysis demonstrates, these numerical experiments confirm that the convergence is fast. A smaller convergence parameter β works for smaller external loading and weaker non-linearity. For a more systematic study, the original governing equation with non-linear spring force $N(u)=\gamma u^{p}$ in Eq. is non-dimensionalized. The characteristic length L is taken as the length of the set $S=\{x\,|\,u(x)\ge\varepsilon,\;x\in\mathbb{R}\}$, where ε is a reasonably small number. Hence, L is analogous to the length of the compact support of the exact solution, since u(x) is negligibly small outside this set. For example, when the exact solution $u(x)=e^{-x^{2}/\sigma}$ is assumed, we obtain L=max(S)−min(S)=4σ for $\varepsilon=e^{-4}$. With this characteristic length L, the space variable x and the displacement u(x) in Eq. can be cast into the non-dimensional forms x⁎=x/L and u⁎=u(x)/L. The non-linear beam equation with non-linear spring force $N(u)=\gamma u^{p}$ is rewritten as . We must point out that the units of the bending rigidity EI, the linear spring constant k, the non-linear spring constant γ and the distributed loading w(x) are N·m², N/m², N/m^{p+1} and N/m, respectively, where N denotes the unit of force. The coefficients $kL^{4}/EI$ and $\gamma L^{p+3}/EI$, and the loading w⁎, in Eq. are confirmed to be dimensionless in a straightforward manner. Using exact solutions similar to those used in , we employ the exact solutions $u(x)=Ae^{-x^{2}/\sigma}$ and $N(u)=\gamma u^{p}$, where A=1, 0.5 and 0.1, σ=1 and 4, and p=2, 3, 4 and 5. The tested cases are summarized in . The non-dimensional parameter $kL/\gamma L^{p}$ denotes the ratio of the linear and non-linear spring non-dimensional coefficients in Eq. . The parameter $kL/\gamma L^{p}$ is varied by changing γ. For a given $kL/\gamma L^{p}$, if the order of the polynomial p in the non-linear spring force varies, then γ is adjusted accordingly, such that $kL/\gamma L^{p}$ remains constant. The numbers in the tables show the lower boundary of β, below which the iterations do not converge. For example, in case (2a), where p=2 and $kL/\gamma L^{p}=0.02$, the iteration converges when β>6 and diverges otherwise. 
Another example is case (2c), where p=3 and $kL/\gamma L^{p}=0.02$, and the iteration converges when β>0.1 and diverges otherwise. It can be seen from that there is one case in which the proposed method fails to produce an accurate numerical solution: case (2a) with $kL/\gamma L^{p}=0.01$, in which the displacement is comparable to the interval of the external loading, does not converge. When the ratio of the displacement to the loading interval decreases, as in cases (2b–2d), the method converges well with, or even without, the introduction of the convergence parameter β. In many practical cases, such as the displacement of rails or highways under trains or cars, the ratio of displacement to the loading interval is small. Hence, the proposed iterative method provides accurate solutions for many practical engineering problems. From the results shown in the tables, it can be inferred that the lower boundary of the convergence coefficient β becomes small when the parameter $kL/\gamma L^{p}$ becomes large. The lower boundary of the convergence coefficient β also becomes small when the order p of the non-linear spring function $N(u)=\gamma u^{p}$ becomes large. If $kL/\gamma L^{p}$ is fixed and we increase the order p, then $u^{p}$ becomes even smaller. Hence, the contribution from the non-linear spring becomes smaller, and the convergence becomes faster and operates better for a small β, as shown in 

Finally, we test more complicated forms of the non-linear elastic terms, i.e., $N_{1}(u)=\gamma u^{2}+\gamma u^{5}$ and $N_{2}(u)=\gamma u^{3}+\gamma u^{5}$. For these non-linear terms, we obtain results similar to those seen in cases (2a–2d), as noted above. For a smaller amplitude A of the exact solution and a higher p, better convergence is guaranteed. The case of $N_{2}(u)$ converges even when the case of $N_{1}(u)$ diverges, consistent with the trend observed in the foregoing simulations.

The Dirac delta function of distribution theory can describe concentrated loadings in mechanics. In fact, the Dirac delta function δ(x) and the (generalized) derivative δ′(x) of the Dirac delta function correspond, physically, to a point force and a point moment, respectively. If a system of point forces is assumed to be $w(x)=\sum_{k}\Gamma_{k}\,\delta(x-x_{k})$ for a constant Γk, which indicates the strength of the k-th point force, the linear part of the solution, π(x), to Eq. becomes
$$\pi(x)\equiv\int_{-\infty}^{\infty}G(x,\xi)\,w(\xi)\,d\xi=\int_{-\infty}^{\infty}G(x,\xi)\sum_{k}\Gamma_{k}\,\delta(\xi-x_{k})\,d\xi=\sum_{k}\Gamma_{k}\,G(x,x_{k}).$$
In a similar way, in the case of point moments, an analytic expression for π(x) in Eq. is
$$\pi(x)\equiv\int_{-\infty}^{\infty}G(x,\xi)\,w(\xi)\,d\xi=\int_{-\infty}^{\infty}G(x,\xi)\sum_{k}\gamma_{k}\,\delta'(\xi-x_{k})\,d\xi=-\sum_{k}\gamma_{k}\,G_{\xi}(x,x_{k}),$$
with the point moments modeled as $w(x)=\sum_{k}\gamma_{k}\,\delta'(x-x_{k})$, in which the constant γk denotes the strength of the k-th point moment. These analytically computed results of π(x) for point forces and moments can be implemented in the iterative procedure of Eq. to obtain the non-linear solutions. Hence, introducing concentrated loading does not pose any difficulty or require any change in the proposed method.

In this work, a new iterative method that is simple and straightforward to apply compared with existing methods is proposed to obtain the non-linear deflection of an infinite beam on a non-linear elastic foundation. For this purpose, we formulate a new non-linear integral equation that relates the non-linear deflection of the beam to the non-linear elastic foundation. On the basis of this formulated equation, we employ extensive numerical experiments to demonstrate that the proposed iterative method works satisfactorily. 
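As a final illustration of the concentrated-load case discussed above, the delta functions collapse the convolution into a finite sum, so no quadrature is needed for the linear part; the short sketch below shows this, reusing greens_function() from the earlier sketch (the function name is our own).

```python
import numpy as np

def pi_point_loads(x, positions, strengths, k=1.0, EI=1.0):
    """pi(x) = sum_k Gamma_k G(x, x_k) for point forces of strength Gamma_k at x_k."""
    x = np.asarray(x, dtype=float)[:, None]
    xk = np.asarray(positions, dtype=float)[None, :]
    return greens_function(x, xk, k, EI) @ np.asarray(strengths, dtype=float)
```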
Furthermore, by means of additional numerical experiments, the characteristics of the proposed iterative scheme are also examined, and the range and rate of convergence are discussed. Analysis of the contraction mapping shows that better convergence is guaranteed with a stronger linear spring, a smaller compact support of the solution and a weaker non-linear spring force, either through a smaller non-linear spring coefficient or a smaller deflection magnitude.

Tuning the properties of a complex disordered material: Full factorial investigation of PECVD-grown amorphous hydrogenated boron carbide

A multiresponse 2⁵ full factorial experiment is performed to investigate the effects of growth conditions (temperature, power, pressure, total flow rate, partial precursor flow rate) on the chemical, mechanical, dielectric, electronic, and charge transport properties of thin-film amorphous hydrogenated boron carbide (a-BxC:Hy) grown by plasma-enhanced chemical vapor deposition (PECVD) from ortho-carborane. The main and interaction effects are determined and discussed, and the relationships between properties are investigated via correlation analysis. The process condition with the strongest influence on growth rate is pressure, followed by partial precursor flow rate, with low pressure and high partial flow rate conditions yielding the highest growth rates. The atomic concentration of hydrogen (at.% H) and density are controlled primarily by temperature and power, with low temperature and power conditions leading to relatively soft, hydrogen-rich, low-density, porous films, and vice versa. The B/C ratio is controlled by temperature, power, pressure, and the power*pressure interaction, and is uncorrelated to hydrogen concentration. Thin-film dielectric and electronic structure properties, including the high-frequency dielectric constant (ε1), low-frequency/total dielectric constant (κ), optical band gap (ETauc/E04), and Urbach energy (EU), are correlated strongly with at.% H, and weakly to moderately with B/C ratio. These properties are dominated by the influence of temperature, with a second significant influence from the power*pressure interaction. The interaction of power and pressure leads to two opposite growth regimes (high power and high pressure, or low power and low pressure) that can produce a-BxC:Hy films with similar dielectric or electronic structure properties. Charge transport properties also show a correlation with at.% H and B/C, but not with the electronic structure and disorder parameters, which suggests a complicated relationship between the two. The range of properties measured highlights the potential of thin-film a-BxC:Hy for low-κ dielectric and neutron detection applications, and suggests clear pathways for future material property optimization.

Recent decades have witnessed an explosion of new and complex materials vying for a role in up-and-coming technologies, as well as next-generation technologies in need of perfectly tailored materials. Ambitious and coordinated materials characterization and design efforts, such as those envisioned by the Materials Genome Initiative Thin-film a-BxC:Hy has been under growing consideration for various specialized technologies. As a boron-rich solid, it has a high cross-section for thermal neutron capture and is therefore of interest for nuclear applications ranging from reactor coatings From an experimental perspective, significant groundwork must be laid to position a new material for widespread adoption. 
However, more often than not, such efforts lack sufficient direction and many years are required for applications to come to fruition. In the case of a-BxC:Hy, dozens of materials growth and characterization studies have been published A previous study by Nordell et al. looked at the effects of temperature and power on the growth of PECVD a-BxC:Hy as well as the influence of hydrogen content on a wide range of material properties 

Amorphous hydrogenated boron carbide (a-BxC:Hy) films were grown using a previously described A 2⁵ full factorial experiment design was created using the low (−1) and high (+1) values summarized in . The ranges were chosen so as to stay within the operational limits of the PECVD system as well as to maintain the integrity of the PECVD growth. Five replicate center point (level “0”) growths were additionally included to independently assess error within the process, in lieu of the more experimentally costly option of replicating each individual run. The conditions for each growth are summarized in . All films were grown for 12 min on a combination of 15 × 15 mm silicon [1–15 Ω·cm p-type Si(100)] and glass substrates. The order of the growths was randomized to mitigate any systematic variation. A detailed description of measurement and analysis methods can be found in Nordell et al. . The measured responses () were analyzed to ascertain significant effects. Main effects and interaction effects, as well as their standard errors, t-ratios and P-values, are given in . Standard deviation values (σ) were independently calculated from the center point runs, and are additionally included in . Main effects are defined as the effect of each individual factor (i.e., each individual PECVD process parameter) on a given response (i.e., property). The effect of factor A, for example, can be represented by the difference between the average response value at all high levels of A (A+) and the average response at all low levels of A (A−):

Effect(A) = Avg(A+) − Avg(A−)

Interaction effects occur when the effect of one factor depends on another. The interaction effect between factors A and B, for example, can be represented as one half of the difference between the effect of A at the high level of B and the effect of A at the low level of B:

Effect(AB) = ½{[Avg(A+) − Avg(A−)]B+ − [Avg(A+) − Avg(A−)]B−}

The statistical significance of each effect was evaluated from the t-ratio and P-value, where the t-ratio is given as the magnitude of the effect divided by its standard error, and P-values <0.05 indicate significance at the 95% confidence level. To aid in evaluating correlations between responses (properties), the Pearson and Spearman rank-order correlation coefficients (r and rS) were calculated and are given in , respectively. The Pearson correlation coefficient is a measure of the strength and direction of the association between two variables that are linearly correlated. From an examination of scatter plots, the majority of the sets of variables that exhibit correlation appear to be linearly correlated, and therefore the Pearson correlation coefficient is expected to do an adequate job of estimating correlation strength. In a few cases, however, the variables are correlated in a clearly non-linear fashion, or the data exhibit heteroscedasticity (changing variance along the line of best fit) or contain distinct outliers. In such cases the Spearman correlation coefficient may do a better job of estimating correlation strength. 
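To make the analysis above concrete, the short sketch below generates the coded design matrix, computes main and two-factor interaction effects from contrast columns, and evaluates the two correlation coefficients. It is a generic illustration for a balanced two-level design with placeholder factor labels and an arbitrary random seed, not the actual run sheet or statistical software used in this work.

```python
import itertools
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Coded 2^5 design: 32 corner runs plus five level-"0" center-point replicates.
factors = ['temperature', 'power', 'pressure', 'total_flow', 'partial_flow']
corners = np.array(list(itertools.product([-1, 1], repeat=len(factors))))
design = np.vstack([corners, np.zeros((5, len(factors)))])
run_order = np.random.default_rng(0).permutation(len(design))   # randomized growth order

def factorial_effects(design, response):
    """Main and two-factor interaction effects of a balanced two-level design
    (center points excluded). For such a design the contrast-column difference equals
    Effect(AB) = 1/2 {[Avg(A+) - Avg(A-)]_B+ - [Avg(A+) - Avg(A-)]_B-}."""
    corner = np.all(design != 0, axis=1)          # drop center points from the effect averages
    d, y = design[corner], np.asarray(response)[corner]
    effects = {}
    for i, name in enumerate(factors):
        effects[name] = y[d[:, i] > 0].mean() - y[d[:, i] < 0].mean()
        for j in range(i + 1, len(factors)):
            ab = d[:, i] * d[:, j]                # interaction contrast column
            effects[f'{name}*{factors[j]}'] = y[ab > 0].mean() - y[ab < 0].mean()
    return effects

def correlation(x, y):
    """Pearson r and Spearman r_S between two measured responses."""
    return pearsonr(x, y)[0], spearmanr(x, y)[0]
```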
From a comparison of r and rS values, we observe that the two are generally very close, and conclude that either set should be able to establish gross trends in the data. Although various rules of thumb exist, we will define a strong correlation as one with r/rS > 0.6, a moderate correlation as one with 0.3 < r/rS < 0.6, and a weak correlation as one with r/rS < 0.3.

Although the process parameter effects can be evaluated quantitatively from alone, a graphical analysis is very useful. We have produced Pareto charts and interaction effect plots for a majority of the responses. Pareto charts are simple bar graphs that allow for a quick assessment of the relative magnitude and statistical significance of the different effects (colored bars represent statistically significant effects). Interaction plots provide a visual means for understanding the interactions between factors: most simply, the more the lines deviate from parallel, the greater the interaction. , it is evident that pressure and partial flow rate have the greatest effect on growth rate. High pressure conditions decrease growth rate, while high partial flow rate conditions increase it. The interaction between these two variables is also relatively significant, and we can observe in the interaction effect plot [(b)] that the influence of pressure is greater at high partial flow rates. Power and total flow rate also demonstrate statistically significant effects, although their magnitude is smaller. An increase in growth rate with increasing partial pressure of precursor would be expected based on the higher delivery rate of reactive species to the substrate surface. A decrease in growth rate with increasing pressure can also be explained by the smaller mean free path of reactive species and thus lower delivery rate to the substrate surface. The fact that temperature does not show a significant effect on growth rate suggests that growth rate is primarily limited by mass transport rather than reaction rate, which is consistent with a (relatively) low temperature plasma-assisted CVD process. Further, the fact that increasing power is correlated with a decrease in growth rate may suggest that film growth is not accelerated by increased ion bombardment by higher energy ions, as one might expect 

In the case of the atomic concentration of hydrogen (at.% H), only growth temperature and power are observed to be significant effects, where increasing either parameter leads to a decrease in hydrogen content, with the effect of temperature being about twice that of power []. Similar dominant temperature and power effects are observed for density, but of opposite sign, where increasing either parameter leads to an increase in density []. These observations are consistent with our previous investigation of a-BxC:Hy films grown while varying temperature and power only (a), with an associated Pearson correlation coefficient, r, of −0.84. The effects of temperature and power on hydrogen content and density are not surprising. Increasing temperature presumably allows access to thermodynamically favorable reactions involving H removal and/or cross-linking, while the greater ion energy and ion bombardment from increasing power can facilitate H removal, formation of reactive sites, and overall densification. It is noteworthy that neither pressure nor the power*pressure interaction shows up as a statistically significant effect here, as they do for many other properties (vide infra). Young's modulus (E) and hardness (H) were investigated through nanoindentation experiments. 
Because only a subset of samples was selected for these measurements, there is not enough data for E and H to perform a full statistical analysis. We note, however, that these two properties display a direct linear correlation with each other [r = 0.99; ]. Thus, we can assume that these mechanical properties can be mapped directly by either at.% H or density and would depend similarly on growth conditions (i.e., primarily temperature and power). We have also determined pore size in a selection of films using positron annihilation lifetime spectroscopy (PALS). In six low-density films (0.8–1.1 g/cm3), pore diameters of 0.67–0.73 nm were measured. In the one high-density film (N16, 1.9 g/cm3, 20% H) measured, no pores could be detected, indicating that any pores in this film are below the PALS lower detection limit of ∼0.3 nm. The results for the three moderate-density films studied are mixed: for film N37, with a density of 1.44 g/cm3 (32% H), a pore diameter of 0.68 nm was measured, whereas for films N32 (1.51 g/cm3, 29% H) and N4 (1.43 g/cm3, 21% H), no pores could be detected. The density measured for N4 is likely anomalously low [it appears as a clear outlier in the at.% H vs density plot in ], and this film should probably be considered in the higher density range. In the case of N32 and N37, it is possible that these lie very near the percolation threshold, which may explain the detection of pores in one case but not the other. Overall, the general correlation between pore diameter and density suggests that porosity is also predominantly controlled by growth temperature and power.

The effects of the process parameters on the B/C ratio are very different from those on the atomic concentration of hydrogen. Temperature, power, pressure, and the power*pressure interaction are all significant effects (). In particular, increasing power and increasing pressure are both correlated with a lower B/C ratio, and thus increased carbon content. The interaction between the two is such that the effect of pressure is greater at high power, and vice versa. There is also a noteworthy overlap between the effect profiles for B/C ratio and growth rate: although not all the same effects register as statistically significant in both cases, their signs and relative magnitudes follow the same trend. This observation prompted us to look more closely at the correlation between these two responses. Indeed, the B/C ratio does show a strong correlation with growth rate [r = 0.74, ]. Further, the B/C ratio and the at.% H are not at all correlated with each other [r = 0.03, ]. Since the correlation between B/C ratio and growth rate is clearly non-linear (although it may be considered linear below a plateau of ∼4.5–5), this is an example where the Spearman rank-order correlation coefficient may be more appropriate, and in this case it indicates a slightly stronger correlation than the Pearson coefficient (rS = 0.81). One explanation for the correlation between growth rate and carbon content can be traced back to the plasma chemistry associated with low growth rates. 
At higher pressure and power growth conditions, a higher frequency of collisions with higher energy ions would be expected, which could conceivably lead to a greater number of dissociated carborane molecules and free carbon-based reactive species reaching the substrate surface and incorporating into the thin film.

The high-frequency (ε1), low-frequency (total, κ), and intermediate-frequency (κ – ε1) dielectric constants speak to the electronic, orientation, and distortion contributions to the total polarization response of a material . The second greatest effect is the power*pressure interaction. The large interaction between power and pressure implies that each of these variables does in fact have a significant effect, but that the sign of this effect depends on the level of the interacting factor []. Thus, at low pressure, increasing power increases ε1, while at high pressure, increasing power decreases ε1, a result that is not evident from the main effects alone. A similar argument can be made for the effects of pressure as a function of power level. From this information, we can identify two opposite regimes that would lead to a low ε1: low power and low pressure, or high power and high pressure. These results are consistent with our previous study investigating the effects of temperature and power only, where increasing power led to an increase in ε1 while holding the pressure constant at a "low" value of 0.2 Torr 

Temperature also has a large, statistically significant effect on the response of κ []. No other effects are statistically significant, but their magnitudes generally follow the same trends as in the case of ε1; thus, similar power and pressure effects and interactions are expected. As evident from , κ correlates quite closely with ε1 (r = 0.91); however, from , we see that the difference between the two (κ – ε1) is higher at lower ε1 values. In previous work, we hypothesized that this increase in κ – ε1 was correlated with increased oxygen content, leading to a higher concentration of polar bonds . Indeed, κ – ε1 does correlate with oxygen content, but weakly (r = 0.36), and in fact the correlation between κ – ε1 and at.% H is stronger (r = 0.56). This suggests that there may be an additional chemical/physical mechanism underlying this result. The most significant effect on κ – ε1 is temperature [], but its magnitude is opposite in sign to that for ε1 or κ. Several other main and interaction effects are also of statistical significance, including the temperature*pressure and temperature*flow rate interactions, which suggests that even at low temperature, it may be possible to minimize κ – ε1 if higher pressures and flow rates are used.

Next, we look at electronic structure parameters obtained from optical transmission spectroscopy measurements, including the optical band gap (Eg) as well as the Urbach energy (EU) and Tauc parameter (B1/2), two measures of disorder in amorphous materials . In terms of the effects of the process parameters on Eg, EU, and B1/2, the most significant is growth temperature, and the second most significant is the power*pressure interaction, similarly to the case of the dielectric properties (). Once more, two growth regimes are identified: high pressure and high power, or low pressure and low power. Another interaction effect that may be relevant, because it points to the existence of opposite growth regimes, is the flow rate*partial flow rate interaction. 
Although not statistically significant, the same trends in this interaction effect are observed for ε1, κ, κ – ε1, Eg, EU, and B1/2, which suggests that the effect should be taken into account when evaluating the influence of process conditions. As we have found before for a-BxC:Hy (), this interrelationship between band gap and disorder parameters is typical for amorphous semiconductor materials such as amorphous hydrogenated silicon 

Lastly, we turn to the charge transport response parameters: electrical resistivity (ρ), field-dependent mobility (μF, 0.1 MV/cm), and charge carrier concentration (n). We have only performed a statistical analysis of effects on the electrical resistivity response; due to the nature of the electrical measurements (i.e., the possibility of low-field dielectric breakdown or the inability to do a proper space-charge-limited current analysis), μF and n could only be reliably determined for a subset of samples, and these data sets are therefore missing a relatively high number of data points. As can be seen from , many main and interaction effects exhibit a relatively large magnitude in the case of resistivity, but only the partial flow rate effect is considered statistically significant. The temperature, power, and temperature*power effects are all on the threshold of significance (0.05 < P < 0.1), with several other interaction effects not far behind. The electrical transport property responses exhibit the greatest error (typically an order of magnitude standard deviation; ), and the poor statistics are compounded by the high number of missing data points. However, overall the effect profile suggests that the effects of growth conditions on charge transport properties may be quite complex. Because ρ, μ, and n are related via ρ = 1/(enμ), we expect a correlation between these variables. Indeed, there is a strong linear correlation between log(μF) and log(ρ) [r = −0.82; ], and a strong linear correlation between log(μF) and log(n) [r = −0.77; ], but not between log(n) and log(ρ) [r = 0.29; ]. In the latter case, the log(n)–log(ρ) scatter plot appears to define a plane rather than a line. Indeed, when all three variables are plotted against one another in 3D space, this plane is clearly defined [] by four extremes: low μ (10^−14), high n (10^17) and high ρ (10^16); high μ (10^−9), low n (10^14), and moderate ρ (10^13); high μ (10^−9), moderate n (10^16), and low ρ (10^12); and moderate μ (10^−11), moderate n (10^15), and moderate ρ (10^14).

Amorphous hydrogenated boron carbide (a-BxC:Hy) is a complex material hypothesized to be composed of partially hydrogenated, partially cross-linked icosahedral carborane units and hydrocarbon groups, the precise configuration of which remains unknown . In addition to the hydrogen content, there is a second compositional parameter that also contributes to defining a-BxC:Hy material properties: the B/C ratio. We have plotted a series of properties as a function of both at.% H and B/C ratio (), and contrast the associated Pearson correlation coefficients in . The majority of the properties demonstrate a strong correlation with at.% H and a weak to moderate correlation with B/C ratio. The most pronounced exception (other than the growth rate, which is strongly correlated to the B/C ratio as previously discussed) is the B1/2 parameter, which demonstrates a stronger correlation with B/C (r = −0.68) than with at.% H (r = 0.50). For both μF and n, the correlations with at.% H and B/C are of comparable magnitude. For some properties, scatter plots point to a correlation with B/C ratio, although not necessarily a linear one. 
For example, the ε1–B/C [] scatter plots appear to define triangular planes, seemingly begging for a third dimension. Another notable observation lies in the sign of the correlation coefficients: for E, H, ε1, κ, κ – ε1, ETauc/E04, EU, and B1/2, the correlation coefficients with at.% H and B/C are of opposite sign; however for μF and n, they are of the same sign. Thus, as can be observed in , ε1, for example, is minimized at high at.% H but low B/C, while μF is maximized at high at.% H and high B/C. To investigate a possible interaction between the at.% H and B/C variables, we have defined two new parameters: the product and quotient of the B/C ratio and the at.% H. From , we see that indeed ε1, κ, ETauc/E04, EU, and B1/2 are all better correlated with the combined (B/C)/at.% H parameter than the individual parameters, while μF and n are better correlated with the combined (B/C)*at.% H parameter. These correlations are illustrated three dimensionally in The correlation between B/C ratio and material properties suggests that carbon content presents an influence distinct from that of hydrogen content, whose role was previously rationalized primarily in terms of its effect on coordination number and mass/electron density. For ε1, κ, ETauc/E04, and EU, although B/C has a moderate effect, the effect of H is clearly dominant. For B1/2, the strong correlation to B/C ratio may speak to the influence of C on the density of states features which contribute specifically to the B1/2 parameter The fact that the electrical transport properties (ρ, μF, and n) do not show the same behavior as the electronic structure properties (Eg, EU, B1/2) was surprising to us, as we had originally hypothesized that we might see a strong correlation between, for example, mobility and band tail width—as parametrized by Urbach energy—as has been observed in a-Si:H and related materials and typically interpreted in the context of a multiple trapping model In terms of optimization of properties through controlling growth conditions, some comments can be made. We have concluded that hydrogen and density are primarily controlled by growth temperature and power, while B/C ratio is primarily controlled by power, pressure, and the power*pressure interaction, with temperature and partial flow rate as secondary influences. The B/C ratio appears to be related to growth rate, and to depend on a very similar set of process parameters, which also likely include weaker contributions from total/partial flow rate as well as interactions between pressure and flow rate. The at.% H and B/C ratio appear to not be correlated to each other at all, which is of interest as one might predict a correlation between at.% C and at.% H associated with hydrocarbon content as is typically observed in a-C:H and a-SiC:H Many of the optical/electronic properties (ε1, κ, Eg, and EU) are most strongly affected by growth temperature and most strongly correlated to at.% H, which is also most strongly affected by growth temperature. However, these same properties also show a dependence on power and pressure (including importantly the power*pressure interaction), as well as a weaker correlation to B/C ratio, which is in turn more strongly influenced by power and pressure than by temperature. Thus, while growth temperature clearly plays a dominant role in determining material properties, power, pressure, and total/partial flow rates, as well as their interactions, also play a non-negligible role. 
Pressure and the total/partial flow rates were not varied in our previous work, which investigated the effects of power and temperature. The overall low dielectric constant, high hardness and Young's modulus, and high electrical resistivity of the a-BxC:Hy films are promising toward their use as low-κ dielectrics. The process–parameter analysis suggests pathways for optimizing a-BxC:Hy films for this application, specifically by applying low growth temperature, either at high power and high pressure or at low power and low pressure. Films grown using the first set of conditions (N15, N23, and N31) yielded on average values for ε1, κ, and κ – ε1 of 2.4, 3.1, and 0.7, respectively, while films grown using the latter set of conditions (N1, N9, N17, and N25) yielded on average values for ε1, κ, and κ – ε1 of 2.6, 3.7, and 1.1, respectively. Because high power and high pressure growth conditions lead to higher electrical resistivities (i.e., lower leakage currents), and minimize the κ – ε1 contribution, these may be more favorable. In addition, although low temperatures minimize the ε1 value, they also increase the κ – ε1 contribution and decrease ρ, which suggests that a slightly higher temperature may be preferable. Finally, a combination of high flow rate and high partial flow rate may prove beneficial in minimizing the κ – ε1 contribution, and exploring conditions beyond those applied in the present work could be worthwhile. For semiconductor applications where charge carrier mobility is important, such as neutron detection, the relevant figure of merit is instead μF, which was found to be maximized at high at.% H and high B/C ratio.

Through a multiresponse 2⁵ full factorial analysis of a-BxC:Hy films grown by PECVD from ortho-carborane, we have investigated the influence of five process parameters (growth temperature, RF power, pressure, total flow rate, and precursor partial flow rate) on a wide range of material properties. Thin-film properties were found to vary quite widely, within similar ranges as previously observed. Growth rate is most strongly influenced by pressure, followed by precursor partial pressure, with high growth rates upwards of 100 nm/min achieved at low pressures and high precursor dilutions, and rates on the order of 1 nm/min for the opposite conditions. Hydrogen and density, as previously observed, are primarily controlled by growth temperature and power, while the B/C ratio is primarily controlled by power, pressure, and the power*pressure interaction, with a strong correlation to growth rate and the factors influencing growth rate. High temperature and power conditions lead to hard, hydrogen-poor, high-density, low-porosity films, and vice versa. Dielectric and electronic structure properties (ε1, κ, ETauc/E04, and EU) are found to correlate most strongly with at.% H, but also to some extent with B/C ratio, and are influenced accordingly by growth conditions, showing a dominant effect from temperature and a secondary effect from the power*pressure interaction.
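For readers less familiar with two-level factorial designs, the sketch below shows how main and two-factor interaction effects are estimated from coded ±1 factor settings in a 2⁵ design. The factor names follow the five process parameters listed above, but the response values are synthetic and the calculation is only a generic illustration, not a reproduction of the effect estimates or significance tests reported here.

```python
# Sketch only: effect estimation for a 2^5 full factorial design with hypothetical responses.
import itertools
import numpy as np

factors = ["T", "P", "p", "Ftot", "Fpart"]  # temperature, power, pressure, total and partial flow rate
levels = np.array(list(itertools.product([-1, 1], repeat=5)))  # 32 coded runs

rng = np.random.default_rng(0)
# Hypothetical response, e.g. log10(resistivity), with an artificial T and P*p dependence plus noise.
y = 14.0 - 0.8 * levels[:, 0] + 0.5 * levels[:, 1] * levels[:, 2] + rng.normal(0, 0.3, 32)

# Main effects: mean(y at +1 level) minus mean(y at -1 level) for each factor.
for i, name in enumerate(factors):
    effect = y[levels[:, i] == 1].mean() - y[levels[:, i] == -1].mean()
    print(f"main effect {name}: {effect:+.2f}")

# Example two-factor interaction (power*pressure): the same contrast applied to the product column.
pp = levels[:, 1] * levels[:, 2]
print(f"P*p interaction: {y[pp == 1].mean() - y[pp == -1].mean():+.2f}")
```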
The electrical properties also appear to be moderately correlated with at.% H and B/C ratio, but not in the same way as the electronic structure parameters, and thus no straightforward correlation between electronic structure and charge transport is observed.

Overall, the results of the 2⁵ full factorial analysis give insight into the a-BxC:Hy film growth process, demonstrate the extreme tunability of this material, and direct us to the ranges of PECVD parameter space optimal for producing films with properties needed for low-κ dielectric, neutron detection, and other next-generation applications.

Supplementary data related to this article can be found at http://dx.doi.org/10.1016/j.matchemphys.2016.02.013

A 12 MHz micromechanical bulk acoustic mode oscillator

We demonstrate a bulk acoustic mode silicon micromechanical resonator with its first eigen frequency at 12 MHz and a quality factor of 180 000. Electrostatic coupling to the mechanical motion is shown to be feasible using a high bias voltage across a narrow gap. By using a low-noise preamplifier to detect the resonance, a high spectral purity oscillator is demonstrated (phase noise less than −115 dBc/Hz at 1 kHz offset from the carrier). By analyzing the constructed prototype oscillator, we discuss in detail the central performance limitations of using silicon micromechanics in oscillator applications.

Mechanical resonance is widely applied in high-precision oscillators: a typical example is the quartz crystals used in a multitude of time-keeping and frequency reference applications. We further show that effective electrical coupling to the stiff mechanical vibration mode is feasible electrostatically using a high bias voltage across a narrow gap. To benchmark the resonator properties, we construct a high spectral purity oscillator. The measured phase noise Lf≈−115 dBc/Hz at 1 kHz offset from the carrier shows significant improvement over other existing micromechanical oscillators.

A scanning electron microscope (SEM) view of the bulk acoustic mode silicon micromechanical resonator is shown in . The resonator vibrates in a lateral length-extensional mode, where the two arms move in anti-phase as illustrated by the arrows in . The first eigen mode is observed when the length of the resonator arm L equals a quarter of the bulk acoustic wavelength λ, i.e. L = λ/4, so that the eigen frequency is f1 = (1/4L)·√(Y/ρ), where Y is the Young's modulus and ρ the mass density of silicon. The surface plane orientation was (1 0 0) and the resonator arms were in the [1 1 0] crystal direction. For electrical conductivity, the silicon wafer was heavily boron-doped (ρB≈5×1018
cm−3). The length of the resonator arm was L=180 μm, measured from the center of the anchoring bridge of width wa=8 μm and length La=80 μm. The width w of the resonator arm was 10 μm. The moving structure was released from the 1 μm thick oxide layer by wet-etching with hydrofluoric (HF) acid (). In order to prevent capillary forces from dragging the devices onto the surrounding structures, drying was performed using a sublimation technique. The narrow coupling gaps at the resonator end-points were designed to have a value d=1.0 μm. A detailed SEM inspection revealed highly vertical sidewalls of the gap.

The resonator was symmetrically biased and driven as shown in . A high dc voltage UDC=100 V was used across the narrow gaps at the resonator arm end-points. Electrical contacts to the structure were made using wire bonding to square-shaped contact pads with side length 100 μm. Each pad was estimated to create a parasitic capacitance Cpad≈0.3 pF to the substrate through the 1 μm thick SiO2 layer. The feed-through via the pad capacitances was removed by grounding the substrate.

The resonance current was detected using an amplifier block with a low-noise junction field-effect transistor (JFET), Philips BF545B, as the first stage. The JFET was operated in a common-source configuration using a drain resistor Rd=500 Ω. The input capacitance was estimated as Cin≈4.0 pF from measurements using known capacitors to replace the resonator. Electrical dissipations remained insignificant in comparison with the mechanical energy losses (no Q-loading) due to the capacitive detection. The transmission data were recorded using a standard network analyzer (HP 4396B). The resonator was placed in a vacuum chamber (p<10−2
mbar) resulting in negligible air damping.

To simplify the analysis, a resonator model was developed. The continuum approximation for the longitudinal standing wave results in the mode shape u(z, t) = X(t)·sin(πz/(2L)) for the vibration amplitude, where z indicates the position along the resonator arm (z ∈ [0, L]) and X is the vibration amplitude of the resonator arm endpoint (z = L). By integration over this mode shape, the vibrational system can be cast into a form of one degree of freedom X governed by the equation of motion (near the resonance and at small vibration amplitudes) m·Ẍ + γ·Ẋ + k·X = F(t), where k is the effective spring constant, m the effective mass, γ the damping coefficient given by the resonance quality factor as γ = √(km)/Q, and F(t) is the driving force. Based on the mode shape, the effective vibration parameters k and m are given as k = π²YA/(8L) and m = ρAL/2, where A is the cross-sectional area of the resonator arm; note that √(k/m)/(2π) reproduces the quarter-wavelength eigen frequency given above. The damped mechanical mass–spring system is connected to a capacitive transducer creating the driving force; Cw denotes the transducer working capacitance. We identify η = UDC·∂Cw/∂x as the electromechanical coupling coefficient, resulting in the relation i = η·Ẋ between electrical current and mechanical velocity in the transducer. From this, an electrical equivalent circuit for the electromechanical resonator can be derived and is shown in ; the measured transmission response can then be conveniently reproduced using e.g. a circuit simulation program. In the present study we have used the APLAC software and the related micromechanical component library; further details of the simulation method can be found in . The measured transmission signal |S21| at a small excitation level UAC=22 mV is shown in . The mechanical resonance appears at fr=11.7481 MHz. We find this frequency to be in good agreement with the quarter-wavelength relation: assuming L=180 μm and ρ=2330 kg/m3, the measured resonance frequency fr is obtained when Y=166.7 GPa. This value for Y is 1.3% smaller than the literature value for Young's modulus, Y=168.9 GPa. For further precision, we have investigated the effect of the anchoring bridge. Using finite element method (FEM) simulations, the bridge width wa=8 μm was found to increase the resonance frequency by 30 kHz (0.25%) as compared with zero bridge width. This corresponds to a shortening ΔL=−0.45 μm in acoustic length. Assuming L=179.55 μm increases the material softening shift in Y to 1.8% in comparison with the literature value for pure silicon. We also note that the measured resonance frequency is slightly affected by the electrical spring effect of the dc-biased nonlinear transducer (see ). Using the simulation model, the measured resonator response in could be best reproduced using the values listed in . The quality factor Q≈180 000 is exceptionally high for a micro-resonator. We attribute this to two principal factors: (i) the energy leakage to the anchoring areas is small since the central support bridge remains nearly motionless due to the balanced operation mode, and (ii) the surface-induced losses play a less significant role in the bulk mode than in a flexural mode; the surface-area-to-volume ratio of the bulk mode resonator is typically at least an order of magnitude smaller than for a flexural mode resonator of the same eigen frequency. The measured Q is also in agreement with the empirical scaling law Q·fr ≈ constant.

The anti-phase vibration mode of the two resonator arms is crucial for low-loss resonator operation: we have fabricated and measured a similar single-arm version of the resonator, where a single beam is directly connected to an anchor pad. The measured quality factor was only Q≈100.
This indicates that in the dual-arm configuration the sound waves travelling in opposite directions create a nearly perfect boundary condition when meeting at the central support point. Obviously, any fabrication asymmetry between the resonator arms may destroy this operation mode. However, in the several measured resonator samples the parasitic vibration modes due to asymmetry have remained negligible. was obtained using an effective value of d=1.02 μm for the transducer gap, which corresponds to a 2% widening of the 1 μm nominal gap width during the DRIE process. The corresponding transducer gap capacitance was Cw=0.69 fF. In parallel with the resonator a feed-through parasitic capacitance was found with value Cthru=5.5 fF (As the vibration amplitude at resonance was raised, the resonance peak became asymmetric and was shifted towards lower frequency due to nonlinear behavior An important observation is that the measured hysteresis amplitude corresponds to a stretching of the resonator arm Xc/L≈5×10−4, which remains well below the typically measured fracture limit X/L≈10−3 to 10−2 for bulk Si To investigate the resonator’s ability to produce sustained high spectral purity electrical signal, a prototype oscillator circuit was constructed. We use a configuration , where the positive feedback is simply obtained by amplifying the resonator output signal and inserting the result in proper phase into the resonator input. The gain and phase conditions for the closed loop were set for oscillations to start at UDC=130 V from thermal fluctuations.The oscillator output spectrum recorded using a spectrum analyzer from the buffer amplifier is shown in . The narrow oscillation peak demonstrates the successful oscillator operation. The oscillation amplitude was limited by the resonator nonlinearity. Determined from the measured output carrier amplitude, the mechanical vibration amplitude was X≈50 nm (X/d≈5%).Since the oscillator spectral purity was beyond the dynamic range of a typical spectrum analyzer, especially at small frequency offsets from carrier, we used the standard heterodyne setup shows the recorded single sideband phase noise spectrum Lf. The oscillator is seen to exhibit Lf≈−115 dBc/Hz at f=1 kHz offset from the carrier and a noise-floor of Lf≈−120 dBc/Hz. The phase noise spectrum shows significant improvement over other existing silicon micromechanical oscillators To calculate the oscillator phase noise, let us derive the signal-to-noise (S/N) ratio for the unclosed feedback loop. Based on the noise power spectral density at the JFET input is, S(in2)=(ωCg)2kBT/gm represents the charge fluctuations in the JFET channel leaking through the gate capacitance Cg is the motional impedance of the resonator and . The resonator noise is assumed to be dominated by the mechanical dissipations and represented by the Johnson–Nyquist noise When the oscillator feedback is turned on, the calculated noise transforms into phase fluctuations according to Leeson’s equation where Lf0=S(νn)/2 is the single sideband phase noise , it is assumed that the noise is white and that oscillator operation remains linear.Let us evaluate numerically the phase noise according to
kΩ (at UDC=130 V, for the parallel resonator arms combined), and rapidly increases as a function of frequency offset as shown in . This is thus far from the perfect noise-matching situation; the data indeed clearly reveal the dominance of the amplifier noise: near fr the combined noise from the two JFET noise sources is . For the JFET noise we have assumed gm=4 mS and Cg=4.0 pF. At the noise-matching condition a very low amplifier noise temperature (typically Ta<10 K) would apply; the exact noise-matching optimum can, however, be reached in principle only at a single frequency offset. The observed signal level at the JFET input was νs=ηx0/Cin=1.75 mV, which results in Lf0=−120 dBc/Hz at large offsets from the resonance frequency. Comparison between the calculated and measured phase noise reveals that the near-carrier noise spectrum was strongly affected by the nonlinear operation point (the oscillation amplitude was limited by the resonator nonlinearity). The nonlinearity induced mixing of low-frequency 1/f noise, which can be seen in the measured noise spectrum below f≈1 kHz as a 1/f3 decay rate characteristic of aliased noise. The measured noise-floor in is reached only above f=1 kHz, which is clearly larger than the corner frequency f=fr/2Q=65 Hz predicted by Leeson's equation. An optimal near-carrier noise performance of the oscillator could be achieved by using automatic gain control (AGC) electronics to prevent the resonator from entering the nonlinear region.

The decreasing resonator size will ultimately limit the performance as the intrinsic noise mechanisms (e.g. due to thermal fluctuations) will start to dominate. To remedy this coupling problem, the amplifier can be reduced in size or the capacitive transducer can be made more efficient. An on-chip-integrated amplifier would allow the use of significantly smaller JFET devices than the discrete component utilized here. Integration would simultaneously allow a significant reduction of the parasitic pad capacitances Cpad which otherwise set a lower limit for the amplifier input capacitance Cin (). The capacitive transducer properties can be improved either by increasing the bias voltage or by narrowing the gap. For example, a straightforward method to improve the present oscillator is to reduce the transducer gap to d≈0.5 μm.

In order to analyze the limitations set by the various nonlinearities, let us expand the mechanical force as a power series in the vibration amplitude. Here, km2 and km3 represent the second- and third-order nonlinearity in the mechanical restoring force, and ke2 and ke3 are the electrostatic terms induced by the nonlinear transducer. The critical vibration amplitudes at the hysteresis limit due to second- and third-order nonlinearities can be approximated by , respectively. Let us first consider the limiting role of capacitive nonlinearity. Inserting the second- and third-order coefficients ke2 and ke3 given in , one obtains a characteristic voltage which corresponds to approximately twice the pull-in voltage Upi. Since in practice we always have UDC<Upi, it follows that Xc(ke3)<Xc(ke2), i.e. the dominant criterion in capacitive nonlinearity is set by the third-order term. This result was verified by the simulation model. The critical amplitude Xc(ke3) for several bias voltages UDC as a function of transducer gap d is shown in . The horizontal solid line in corresponds to the observed critical vibration amplitude set by the mechanical nonlinearity, Xcm≈90 nm. We indeed see that the capacitive nonlinearity starts to dominate at d=0.5 μm if the bias voltage is kept above UDC≈80 V.
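Returning to the phase-noise discussion above, the sketch below evaluates the white-noise Leeson model numerically, using the −120 dBc/Hz floor and the 65 Hz corner frequency quoted in the text. It is only an illustration of the functional form, not the authors' calculation, and it deliberately omits the nonlinearity-aliased 1/f noise that dominates the measured spectrum near the carrier.

```python
# Sketch only: single-sideband phase noise from Leeson's equation with a white noise floor.
import numpy as np

Lf0_dB = -120.0   # far-from-carrier noise floor [dBc/Hz], as quoted in the text
fc = 65.0         # Leeson corner frequency fr/(2Q) quoted in the text [Hz]

def leeson_dBc(f_offset):
    """L(f) = Lf0 * (1 + (fc/f)^2), expressed in dBc/Hz."""
    Lf0 = 10 ** (Lf0_dB / 10)
    return 10 * np.log10(Lf0 * (1 + (fc / f_offset) ** 2))

for f in [10.0, 65.0, 100.0, 1e3, 10e3]:
    print(f"offset {f:7.0f} Hz: L(f) = {leeson_dBc(f):6.1f} dBc/Hz")
# The white-noise model reaches the floor just above fc; the measured spectrum reaches it only
# above ~1 kHz because aliased 1/f noise adds a 1/f^3 component that this model omits.
```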
we show the resonator output current ic at the critical vibration amplitudes Xc. When the vibration amplitude is limited by the capacitive nonlinearity ke3, the output current ic becomes independent of UDC and . Correspondingly, when the vibration amplitude limit remains constant Xcm set by the mechanical nonlinearity, the electromechanical coupling set by UDC and d will determine the output current ic(Xcm). It is essential to note that in the region governed by capacitive nonlinearity, the dramatic decrease is seen Xc(ke3) in is replaced by the much weaker decay of ic due to the simultaneous increase in electromechanical coupling constantWe finally note the conflicting role of the quality factor in the optimization of oscillator phase noise. The nonlinearity-induced limit for vibration amplitude depends on quality factor as . Thus, the available signal power from the resonator is inversely proportional to the Q-value. Thus, while close to the carrier an increasing Q-value still improves the noise performance, at large frequency offsets the oscillator noise-floor in fact deteriorates due to decrease in resonator energy. This scaling law for oscillator noise dependence on Q-value is illustrated in . The oscillator noise is assumed to follow and 1000, Lf0 is then scaled according to the 1/Q dependence of signal power.In this paper we have shown that using the bulk acoustic vibration mode it is feasible to construct a micromechanical resonator at 10 MHz frequency range with quality factors exceeding 100 000. We have further demonstrated that the resonator mechanical motion can be effectively transformed into an electrical signal via capacitive coupling using a high bias voltage across a narrow gap and a low-noise preamplifier. As a proof, an operational high spectral purity oscillator has been constructed and analyzed to have a phase noise Lf≈−115 dBc/Hz at f=1 kHz offset from the 12 MHz carrier. This noise behavior is clearly superior to other reported micromechanical oscillators, but remains still inferior to commercial macrosize quartz oscillators capable of producing Lf≈−150 dBc/Hz at f=1 kHz offset. Detailed analysis of our prototype oscillator shows that its performance is dominated by amplifier noise. This illustrates the characteristic coupling problem to high mechanical impedance encountered in micromechanical systems; the performance is typically limited by the unoptimal amplifier interface before facing the actual intrinsic noise mechanisms which become emphasized as the size of the components is reduced T. Mattila received his MSc and Dr. Tech. degrees from the Department of Technical Physics at Helsinki University of Technology in 1994 and 1997, respectively. Since 1999 he has been working as a senior research scientist at VTT. His current research interests concentrate on micromechanical rf-devices.J. Kiihamäki received his MSc degree in electrical engineering in 1988 and his Lic. Tech. degree in 1992 from Helsinki University of Technology. Since 1988 he has been research scientist at VTT. He currently develops fabrication processes for micromechanical devices on SOI in the micromechanics group at VTT Microelectronics.T. Lamminmäki received his MSc degree from the Department of Electrical and Communication Engineering at Helsinki University of Technology in 1999. Lamminmäki worked as a research scientist at Metrology Research Institute, Helsinki University of Technology during 1999–2002. He is currently with Future Electronics.O. 
Jaakkola has worked at VTT since 1990 and he studies technical physics at Helsinki University of Technology. He has worked on subjects such as measurement electronics, rf-technology and dynamical systems.P. Rantakari was accepted as an undergraduate student at Helsinki University of Technology in 1997. Since 2000, he has been working as a research assistant at Metrology Research Institute of Helsinki University of Technology. He is currently working on rf-micromechanics and microsystems.A. Oja received his Dr. Tech. degree in 1988 from the Low Temperature Laboratory of the Helsinki University of Technology. Before joining VTT at 1995 he investigated nuclear magnetism at nanokelvin temperatures. He is the leader of MEMS Sensors group at VTT Microsensing and research professor since 2000. His current research interests include several microelectromechanical sensors and devices: rf-resonators, high-precision MEMS, ultrasound sensors, and magnetic MEMS.H. Seppä received his MSc, Lic. Tech., and Dr. Tech. degrees in technology from Helsinki University of Technology in 1977, 1979 and 1989, respectively. From 1976 to 1979, he was an assistant at the Helsinki University of Technology, working in the area of electrical metrology. He joined the Technical Research Centre of Finland (VTT) in 1979 and since 1989 he has been employed there as a research professor. In 1994, he was appointed head of the measurement technology field at VTT Automation, and in 1996–1998, he acted as research director at VTT Automation. He has done research work on electrical metrology, in general, and on superconducting devices for measurement applications, in particular. He is doing research on dc-SQUIDs, quantized Hall effect, SET-devices, rf-instruments and microelectromechanical devices.H. Kattelus received his MSc and Dr. Tech. degrees from the Department of Electrical Engineering at Helsinki University of Technology in 1980 and 1988, respectively. Since 1979 he has been working at VTT developing thin-film processes and devices for various applications. Presently he is the leader of the micromechanics group at VTT Microelectronics and a research professor in microsystems technology.I. Tittonen received his MSc and Dr. Tech. degrees from the Department of Technical Physics at Helsinki University of Technology (HUT) in 1988 and 1992, respectively. During the years 1995–1997, he worked as the Alexander von Humboldt stipendiat at the University of Konstanz, Germany. Then he joined the Metrology Research Institute of Helsinki University of Technology as the senior assistant. Since the beginning of the year 2001 Dr. Tittonen has been appointed as the Professor of Physics at HUT. His current research interests are microsystems and quantum and atom optics.The relative influence of lithology and weathering in shaping shore platforms along the coastline of the Gulf of La Spezia (NW Italy) as revealed by rock strengthAlong the rock coasts of the Gulf of La Spezia, which are characterised by a Mediterranean microtidal environment, a limited number of small rock platforms are scattered, constrained in elevation within 5 m above present-day sea level. This work deals with a number of these rock platforms, formed in different rock types (one in limestone and two in dolomite), that show differences in their morphology. This paper aims to provide a quantitative examination of why there are morphological differences between platforms in this region. 
To achieve this purpose, factors controlling platform morphology and the processes acting on them are investigated through a comparative analysis of rock strength. Rebound values, obtained testing rock surfaces with the Schmidt hammer, were compared between different platforms and between different sectors of the same platform. Each platform was subdivided into two parts based on visual difference in rock surface colour, characterised by differences in occurrence of weathering microforms and bioerosive agents. Rebound values in the lower part of the platforms proved to be lower than in the upper part, providing quantitative assessment of the occurrence of weathering acting to different extents in the upper and lower part of the shore platforms (weathering degraded rock strength in the lower part by about 15%). It was demonstrated that on the upper part of platforms, displaying moderate evidence of physical and biological weathering, lithology significantly influences the rock strength. On the portion of platforms closer to sea level, instead, differential exposure histories of the same rock type in the same environmental setting can yield statistically significant variations in rock strength values. Thus, it is clear that in the lower part of the investigated platforms, the degree of weathering has strong bearing on rock strength, and that variations in rock strength are not solely due to lithology.According to the results of this work, experimental values of rock strength of platforms in the study area depend both on the rock type and on physical weathering due to frequently repeated wetting and drying and bioerosion. Lithology is then an important factor controlling platform shape and weathering is an important process operating on them.The coast of Liguria (north-western Italy) is mostly rocky, only interrupted by minor coastal plains or by natural and/or anthropic pocket beaches. The rock coast is principally shaped by landslide-dominated slopes and plunging cliffs but also complex slope profiles are not infrequent. These are characterised by narrow seaward-sloping or sub-horizontal shore platforms, benches and/or seaward-sloping ramps backed by cliffs or steep slopes, often associated with marine terraces in a staircase. , referring to the coast of western Liguria between Varazze and Cogoleto, reported remains of raised shore platforms at 1–1.5 m a.s.l. (above sea level) and at 5–8 m a.s.l. backed by cliffs or steep slopes connecting them to higher marine terraces between 70 and 105 m a.s.l. Although the terraces lack deposits suitable for radiometric dating, these authors attributed the lower platforms to the upper Pleistocene, considering them as being inherited coastal features. Also in Eastern Liguria, recognised two orders of rock platforms, the inner margins of which cluster around altitudes of 5 and 12.5 m a.s.l.At the easternmost end of Liguria, along the carbonatic coast of the Gulf of La Spezia (), complex slope profiles include seaward-sloping shore platforms characterised by a near-vertical drop at the outer edge and, generally, by a sharp junction between the platform and a backing cliff or a steep vegetated slope. Referring to the eastern coast of the Gulf ( attempted to typify these complex coastal slopes. The author discerned among them four different types that she labelled with the capital letters from A to D, though intending neither to replace nor to improve the classification. 
Arozarena Llopis' typologies are basically distinguished considering the different slopes (angles) of the seaward-sloping rock surfaces and the control exerted on these by the rock structure and tectonic features, being the rock surfaces of types A and C developed mainly following the structural/tectonic features (discontinuities). The author suggested that these coastal landforms are being progressively dismantled by current physical and chemical processes due to their proximity to sea level, but that their major morphological features are inherited from past wave climate and sea level conditions. In Arozarena Llopis' types of coastal profile are summarised and tentatively related to Therefore these shore platforms/rock marine terraces, described by different authors, can be considered a recurrent landform along the littoral of Liguria. Moreover, a real advance in understanding these shore platforms requires a quantitative approach to their study, the results of which could be relevant to the wider rock coast scientific community.This paper investigates the role of lithology and weathering in determining rock strength as assessed by the Schmidt hammer on shore platforms of the La Spezia Gulf coastline. In this work those platforms ranging in altitude from few decimetres up to 5–6 m a.s.l. are considered shore platforms. In fact they are constrained in the shore, i.e. between the water's edge at low tide and the upper limit of effective wave action (). The rock platforms exceeding these heights are raised and therefore they must be interpreted as marine terraces, the counterparts of shore platforms (The debate concerning the relative roles of marine and sub-aerial processes in the development of shore platforms is still open () even if the literature dealing with the controlling factors and processes guiding the development of platforms in all tidal environments is quite abundant (). However, few case studies have dealt with shore platforms developed in coastal environments with very small tidal range, only quantifiable in some decimetres, as in the case of most of the coasts of Mediterranean (). This paper aims to provide a contribution to the literature on the shore platforms typical of the Mediterranean coast, an area where there is a need for quantitative studies on rock coasts evolution. The study sites selected for this work, which are described in detail in ). In particular this work focuses on whether it is possible to differentiate tracts of their long profiles with different degrees of weathering and to determine to what extent differences in hardness among platforms are due to lithology rather than to processes acting on them. The point is whether the Schmidt hammer test can be used, as it is used for shore platforms in open-ocean coasts (where they are typical, ) to investigate the morphologies of these rock platforms and the processes responsible for their shaping. The paper will characterize these landforms by comparing rock strength: (a) between differently weathered zones across shore platforms; (b) between sites with different or similar weathering histories shaped in the same and in different lithologies. The Schmidt hammer provides a proxy for compressive strength, which measures rock hardness (). In geomorphological research, it has been used to compare rock hardness and to highlight differences in weathering among exposed rock surfaces (). 
In rock coast research, in particular, the Schmidt hammer test has been used to investigate the processes operating on shore platforms (weathering, abrasion, wave erosion).Rebound (R) values derived from Schmidt hammer measurements on exposed rock surfaces are often negatively correlated to the degree of weathering to which the surface has been exposed. highlighted differences in the degree of weathering on upper and lower portions of shore platforms in Galicia (Spain) to demonstrate that those features are partly inherited, and used rebound values and the characteristics of discontinuities to assess the relative resistance of the rocks around the coast of Lord Howe Island (Southwest Pacific). In order to evaluate the control that rock resistance has on coastal morphology, considering also the possibility of landform inheritance, they compared the degree of weathering of different platform levels to assign them relative ages. determined the degree of reduction in rock strength due to weathering, comparing rebound values between weathered and unweathered exposed rock surfaces. did not find meaningful variations in hardness between each of the zones identified in long profiles traced normal to the coastline of the Wellington (New Zealand) area (from high tide spray zone to low tide zone).Further applications of the Schmidt hammer deal with the relative role of waves in rock coast landform evolution. used rebound values to calculate the compressive strength of rocks (as representative of rock resisting force), in order to compare the effect of waves' assailing force. identified, in some rock platforms in western Galicia, strips where abrasion occurs instead of weathering. They argued that in the intertidal zone, higher R values indicate abrasion, while lower R values indicate weathering.The Schmidt hammer has also been used to test whether differences in morphology reflect different rock strengths. The role of rock hardness in the development of different coastal landforms has been investigated by , who tested how rock strength influences the shore platform gradient and stack occurrence in New Brunswick (Canada); and by , who provided a semi-quantitative estimate of rock resistance (combining R with joint density) to test whether, along the coast of Lord Howe Island, plunging cliffs are developed where rocks are harder and shore platforms where they are more erodible. Finally, in south-eastern Australia, investigated the relationship between rock strength and shore platform elevation and highlighted structure and resistance of the rock type as primary determinants of platform morphology. In recent papers Schmidt hammer test has become a routine methodology to quantify the overall hardness of shore platforms (e.g. ) forms a deep indentation in the easternmost part of the rocky coast of the Liguria region and it is bordered by two promontories stretching NW–SE. Two of the study sites, Punta/Seno di Treggiano (TEL) and Punta/Seno delle Stelle (SS), are on the eastern promontory of the Gulf, while the third, Cala Piccola (CP), is on the side facing the open-sea of Palmaria Island, located SE of the western promontory (The two promontories belong to the Northern Apennines, a NW–SE-striking fold-and-thrust belt, which originated during the Late Cretaceous to present convergence between the European and African plates (). 
They also represent the horsts limiting the graben of the Gulf of La Spezia, which belongs to the wider Vara Valley–Magra Valley graben system () developed in the inner side of this sector of the Northern Apennines since the late Miocene–early Pliocene (). The graben is affected by a NW–SE extensional fault system () and also by faults striking NE–SW (transverse system), roughly perpendicular to the previous ones. In addition, the open-sea-facing side of the western promontory is also affected by the NW–SE-striking faults belonging to the fault system responsible for displacing the eastern Ligurian continental shelf (), which was active up to the Upper Pleistocene (The rocks cropping out in the study sites are mainly calcareous or dolomitic, belonging to the greatly tectonized unit of the Tuscan Nappe (Falda Toscana Auctt; ), a sedimentary sequence that, according to , has been affected by at least five deformational phases since the late Oligocene–early Miocene to Pliocene. According to the most recent stratigraphic revision (), the upper Triassic “La Spezia” Fm and the Upper Triassic–Lower Jurassic “Dolomie di Monte Castellana” Fm crop out at the study sites.The coastline of the promontories is characterised by alternating bays and headlands, with a high variable morphology. In fact, the first 15 m above sea level are characterised by cliffs, ramps, stacks, rock ledges and platforms; in addition, gravelly pocket beaches are located inside the embayments (). Landslides are widespread, mainly in the most prominent part of the eastern promontory (). The hillsides of the two promontories are quite steep; they display slope-over-wall profiles, and staircase morphologies are not infrequent (The complicated template provided by the different fault systems and tectonic history has controlled the erosive processes and the evolution of the landforms. Normal faults parallel to the coastline are highlighted by tectonic saddles in the inner part of the headlands. It is not known whether these faults are still active, but the area was affected by vertical movements from the Pliocene to Late Pleistocene and, at least on the continental shelf, up to the Holocene (), guides the morphology of rock surfaces at sea level.The meteorological conditions of the area were derived from the wind gauge of Palmaria Island (44° 02′N; 9° 50′E; 201 m a.s.l.) and from a wave gauge 10 km offshore the Gulf of La Spezia (43° 55′N 9° 49′E). Wind records show a dominance of NE and SW winds. Prevalent incoming waves, however, are from the SW (Libeccio waves). Their paths are then parallel to the coastline at all three study sites. Offshore, waves 0.10 and 1.25 m high are most frequent, with an annual frequency of 60%; larger wave heights, between 1.25 and 4 m, have an annual frequency of only about 10%. Nearshore wave transformations are limited at the study sites, as they are located in the most external part of the Gulf, where the sea bottom is deep until very near the coast, sharply reaching depths of 10–15 m within 200 m from the shore. For this reason, waves seldom break on the rocky coast, generally being reflected instead.The tidal range in the Ligurian Sea is one of the smallest in the world. Predicted astronomical tide has a maximum range of 40 cm (spring tide). Sea level recorded by the La Spezia oceanographic observation station (), however, displays aperiodic fluctuations of high magnitude and low frequency. 
highlight a correlation of positive sea level anomalies with atmospheric pressure reduction and suggest that they are the result of the overlap of astronomical and meteorological components. The contribution of these meteorological surges to short-term sea level change is, in this microtidal environment, more important than the astronomical tide contribution, producing aperiodic fluctuations of ± 50 cm. the main features of the three study sites are summarised.The tract of coastline where the TEL site is set includes two narrow platforms at the sides of the Treggiano Cove (Punta/Seno di Treggiano) separated by a cliffed tract in the central part of the cove (). It was classified as a type D platform by because of its morphology and because evidence was provided of it being an inherited feature. In fact close to its inner edge, at an elevation of about 5 m a.s.l., a beach deposit linked to the last interglacial was found covering the bottom of a cave. The platforms are about 20 m wide, gently dipping towards the sea (10° on average), truncated seawards by a plunging cliff and backed by an inactive cliff/slope that connects them to an upper marine terrace in a staircase (). The topographic profile is irregular due to the intense weathering forms affecting the surface (The SS site comprises three small platforms () in the eastern side of the Stelle Cove (Seno/Punta delle Stelle). ). They are partly surrounded by low lying relics of platforms, now constrained below low tide, from which stacks rise. Their seaward edge is steep, varying from a cliff to a 40°-dipping ramp. These shore platforms are in a staircase with two marine terraces at 15 and 28 m a.s.l. () is in the innermost part of the bay named Cala Piccola, along the western coast of Palmaria Island, where a shore platform (). The platform is 20 m wide in its central part and ranges in altitude between 0.5 m (outer margin) and 4 m (inner margin) a.s.l. (); at the seaward edge the platform drops into the sea in the form of a steep cliff and on the opposite side it is backed by a very steep scarp (palaeocliff) mantled by cemented continental stratified scree not younger than MIS 3 (). The platform should, therefore, be considered an inherited landform. The surface of the platform is rather smooth and gently dipping seaward (The rock strength on shore platforms has been measured at the three sites Cala Piccola (CP), Punta/Seno delle Stelle (SS) and Punta/Seno di Treggiano (TEL) (). At each study site, a number of sampling stations were established (): each station coincides with a single shore platform. A detailed morphological analysis of each shore platform was carried out, in particular highlighting its form and dimensions. At CP only one sampling station was established because in this site only one continuous shore platform is present.Each sampling station has been subdivided in two substations, A (high) and B (low) (), differentiated considering the visible difference of colour of the rock surface: in fact the rock surfaces of the B substation are darker than those of the A substation, because of the presence of cyanophytes.The well-defined boundary that separates each A and B substations is located within the supratidal zone and corresponds to the upper limit of the GR zone (sensu ), separating the portion of rock surface characterised by the presence of endolithic and epilithic cyanophytes, from that free of them. 
In his classical study about bioerosion on limestone coasts of the Mediterranean, has subdivided the inter- and supratidal areas into six zones. Among these the four zones from yellow-brown (corresponding to GB zone sensu ) are characterised by the appearance of endolithic and epilithic cyanophytes and by the presence of cavities and pittings with different extent and degree of development. Depending on the degree of exposure of the coast, these zones may rise up well above the mean high level of spring tide, thus characterizing the supratidal zone. Therefore in this study we intend the A and B substations to be completely constrained within the supratidal zone.The altitude a.s.l. of the boundary between the high and low substations varies between the studied sites and also within each site from one sampling station to another, ranging throughout the area from 0.50 m to 4 m a.s.l.According to their position, at each station the two substations show different weathering meso- and microforms that reflect different processes affecting the rock. The B (low) substation is constrained, at each site, between the spring high tide level and the boundary with the higher A (high) substation. The latter extends on the platform up to the platform–cliff junction.At each sampling station we assessed the different types of weathering for each substation by analysing weathering meso- and microforms and recognising bioerosive agents, the features and occurrences of which has been dealt with. Different types of weathering forms (rock pools) were distinguished and classified according to The Schmidt hammer, consisting of a spring-loaded piston (of a steel mass), allows in situ measurements of rock hardness. When the hammer is pressed orthogonally against a surface of rock, the piston is automatically released onto the plunger and the distance travelled by the piston after rebound (expressed as a percentage of the initial extension of the key-spring) is called the rebound value (R), which is considered to be an index of rock hardness that is related to the uniaxial compressive strength () and which contributes to rock resistance (i.e., the rebound value is a measure of the hardness of the rock surface; ). For this study, an N-type Schmidt hammer was used; this type of hammer, which is extensively used for testing concrete because of its sturdiness, works on materials with uniaxial compressive strengths ranging from 20 to 250 MPa. measurements were taken with the hammer held perpendicular to the rock surface. Measurements were performed at least 6 cm from fractures and edges of the rocks and from pools, and as much as possible on surfaces free of dirt.At each substation measurements of rebound values were taken in correspondence with sampling points (), the number of which was not fixed but increased as a function of the width and variability in weathering of the surface of substation itself. Sampling points were randomly selected, although a gap of at least 50 cm has been left from one point to another and only impurity-free surfaces are considered. In fact, weathering forms were tested but attention was paid to avoid salt crusts and chips of rock almost detached from the outcrop.At each sampling point, the dip of the device was measured so as to correct R values when the hammer was non horizontal, using the correction curves provided by the Schmidt Hammer manufacturer. Thirty-five readings were taken at any sampling point randomly distributed across a matrix approximately 10 × 10 cm in area. 
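As an illustration of the per-point data reduction and the substation comparison described here and detailed in the following paragraphs (dropping the lowest readings, applying Chauvenet's criterion, then comparing substations with a t-test), the sketch below uses invented rebound readings; it is not the processing script used for this study, and the function and variable names are hypothetical.

```python
# Sketch only: reduction of hypothetical Schmidt hammer readings and an A-vs-B substation t-test.
import numpy as np
from scipy.stats import ttest_ind
from scipy.special import erfc

def reduce_point(readings, n_drop=10):
    """Drop the n_drop lowest of the 35 readings, then reject outliers by Chauvenet's criterion."""
    r = np.sort(np.asarray(readings, dtype=float))[n_drop:]
    mean, std = r.mean(), r.std(ddof=1)
    # Chauvenet: reject a value if the expected number of such deviations among n readings is < 0.5.
    n = len(r)
    prob = erfc(np.abs(r - mean) / (std * np.sqrt(2)))  # two-sided tail probability
    r = r[n * prob >= 0.5]
    return r.mean(), r.std(ddof=1)

rng = np.random.default_rng(1)
# Hypothetical per-point mean R values for the upper (A) and lower (B) substations of one station.
R_A = [reduce_point(rng.normal(50, 6, 35))[0] for _ in range(8)]
R_B = [reduce_point(rng.normal(42, 6, 35))[0] for _ in range(8)]

t, p = ttest_ind(R_A, R_B)  # two-sample t-test; at 95% confidence, significant if p < 0.05
print(f"Rmean A = {np.mean(R_A):.1f}, Rmean B = {np.mean(R_B):.1f}, t = {t:.2f}, p = {p:.4f}")
```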
Repeated measurements enabled us, through statistical treatment, to obtain reliable hardness indices, regardless of micro-scale variability due to the presence of different mineral grains in the rock mass. In order to reject abnormal values, at each sampling point the 10 lowest values were removed. The device generally gives errors for the lowest values, but never for the highest, because grain crushing and plunger displacement can cause values appreciably short of the mean. In addition, we applied the Chauvenet's criterion to identify too low and insignificant R values. Attention was paid during field work to avoid inaccurate readings that would affect the entire dataset.Statistical treatment was carried out as follows: for each sampling point, the mean and standard deviation of R values were calculated after Chauvenet's criterion had been applied. In most cases, however, all 25 values proved to be significant.The average rebound value (Rmean) and standard deviation (Std dev) was calculated for each investigated substation (). In order to highlight the range of rebound values within each sampling station, the minimum (Rmin) and maximum (Rmax) collected rebound values are also reported.A t-test (95% confidence interval) was performed to test the hypothesis that differences between groups of measurements were significant. In particular, the t-test has been applied to test the difference between R values of high and low substations for each sampling station (substationA vs substationB, ). This allowed us to verify the effective difference in rock strength between the two parts (A and B) of the shore platforms surface.The t-test has also been applied to the R values obtained from each substation in order to test whether there are statistically significant differences 1) between the A substation of TEL vs that of CP and between A substation of SS vs that of CP; moreover the test was applied 2) between the B substation of TEL vs that of CP and between B substation of SS vs that of CP (). This was done to compare case by case two platform sectors in which either the lithology or degree of weathering are different between the two sectors; moreover two platform sectors in which both parameters change were compared in order to test the method efficiency. This comparison proved useful to test which is the factor, lithology or weathering, that prevails in determining the rock strength in homologous sectors of different shore platforms, considering the mutual interaction between them.The TEL site was subdivided into two sampling stations (TEL1 and TEL3, ); differences in the slope long-profile can be recognised from one station to the next. In fact, at TEL1 the outer edge subtends a cliff that plunges directly into the sea with a dip of around 50–60°. At TEL 3 the cliff is vertical and is separated from the sea by a narrow rock bench, at most 1.5 m elevated above sea level.The morphological differences between stations cause the shift in altitude of the boundary between the A and B substations along the coastline. In fact, where the cliff is exposed directly to the waves (TEL1), the boundary between the two substations rises along the cliff. The boundary is at about 4 m a.s.l. at TEL1, coinciding approximately with the knickpoint between the platform outer margin and the seaward cliff. 
At TEL3, where the rock bench is developed, the lower substation practically coincides with the bench itself (The SS site displays three platforms (sampling stations SS1, SS3 and SS5, ), differing mostly with regard to their aspect and position along the coastline; these control the exposure of the platforms to wave attack and therefore the height of the boundary between the A and B substations. SS1 has one side sheltered and one exposed to waves (). Its outer margin is represented by a cliff that is lower (0.20 m) in the sheltered side and higher (1 m) in the exposed one. The height of the boundary between the A and B substations varies from 0.60 to 1.20 m a.s.l., showing a close relation with the height of the outer cliff: where this last has its minimum height, the boundary between the high and low substation rises up on the surface of the platform over a width of about 6 m.In SS3 the rock platform is formed on the rock strata, which are overlapped by a cataclastic layer marking the detachment fault. suggested that the detachment fault has its best morphological expression just in the place where station SS3 was set. The platform ends seawards with a near-vertical cliff 0.50 to 1.30 m a.s.l., and the boundary between the A and B substations is located on the surface of the platform at a mean height of 1.20–1.30 m a.s.l. (The SS5 sampling station was set on a headland. The original platform was evidently eroded by wave action and linked to the presence of a fault system pertaining to the NW–SE system, so that it was subdivided into islets close to the coastline (). The boundary between the high and low substation varies in height, rising up on the platform surface where it is at a lower altitude (in its south-eastern part, where it is 0.50 ± 0.80 m a.s.l.).At site CP a single sampling station was set, subdivided into two sampling substations. The boundary between the A and B substations rises up on the platform surface in connection with its south-eastern portion, where the outer cliff displays its minimum height; elsewhere, it practically coincides with the top of the cliff along the outer rim of the platform itself ( the rebound values obtained for each site are presented. There is not a remarkable difference in rebound values among the tested substations. Rmean values are constrained within the 53 (TEL1A)–36 (SS1B) range.The sampling stations of the TEL site, where the rock type is limestone, show in general mean rebound values slightly higher than those of the other sites. This is true particularly for the A substation that display the highest mean rebound values of the three studied sites. In TEL3, though, the Rmean values are, both for the A and B substations, lower than those recorded at the respective sites at station TEL1. In the SS site the mean rebound values are very similar between the substations, clustering around 38. The rebound values derive from quite broad datasets, as testified by the average difference between Rmin and Rmax values indicated for each substation; the standard deviation, though, suggests that dispersion of the values at each substation is acceptable, considering that local variations in rebound are due to the crushing of loose grains or a slight displacement of the plunger.If we consider the average rebound value (Rmean) in each sampling station the rock surface appears less hard in the B substation than in the higher A one. 
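As a small worked example of the A-versus-B comparison just described, the strength reduction can be expressed as a percentage of the upper-substation mean; the two rebound values below are hypothetical and merely of the order measured on these platforms.

```python
# Sketch only: weathering-related reduction in mean rebound value between substations.
def reduction_percent(R_upper, R_lower):
    """Percentage decrease of the lower-substation mean rebound relative to the upper one."""
    return 100.0 * (R_upper - R_lower) / R_upper

# Hypothetical pair of substation means; the result is comparable to the ~15% quoted in the abstract.
print(f"reduction = {reduction_percent(47.0, 40.0):.1f} %")
```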
Difference in strength between the rock surfaces of A and B substations is remarkable in all sampling stations, apart from SS3 at Punta/Seno delle Stelle and CP at Cala Piccola. In SS3 the two mean rebound values are very close, highlighting that the strength of the rock in the upper and lower portion of the platform is similar. Also at CP the rebound values of the two substations are very similar; considering that values recorded in the single sampling points are quite scattered, the two substation datasets almost totally overlap.Differences in R values between the two different substations for each station proved to be statistically significant, as shown by the results of the t-test (). This was true for all the sampling stations, except for the already mentioned SS3 and CP. In fact also the statistical treatment of the data highlights that, in these cases, the A and B substations are not differentiated in R values, although their mean values were lower in the lower substations than in the upper ones.A comparison between the A substation of different sites () highlights that the R values of all sampling points on TEL site are statistically different from those of site CP, while the R values of SS are not statistically different from those of CP. If we compare the lower substations (), the situation is opposite, with R values differentiating SS from CP, but not TEL from CP.Differences in medium and small-sized weathering morphologies affecting the shore platform surfaces are reported for each site. These are considered to be very important process indicators, and they will be used to explain differences in measured rebound values.At site TEL, the rock platforms display a rugged surface; the fractures affecting the rock represent the preferred way for physical and chemical weathering processes to attack the platforms, promoting the development of an irregular topographic profile (). Locally, this effect is increased by the presence of narrow and deep concavities that are karstic in origin. Close to the platform-backing slope junctions, the bottoms of these holes are covered by red clay due to carbonate dissolution; moving seawards, the holes change into typical Mediterranean coastal weathering forms (rock pools, sensu ). These include rounded to elongated potholes, partly filled with the pebbles responsible for their scouring, up to 6 m wide and 2.5 m deep, and shallow pans up to 2 m long, 70 cm wide and 50 cm deep, whose bottoms appear covered by millimetric to centimetric salt crusts. In addition, inside the B substation, the rock is characterised by well-developed honeycombs that affect the entire surface and, locally, by narrow furrows elongated parallel to the rock surface dip.At site SS, both in their upper and lower part, the surfaces of platforms show a rugged topographic profile characterised by the presence of prevalent pan-shaped rock pools and potholes that develop preferentially along the discontinuities of the rock; these can reach 4 m wide and 1.5 m deep. In some cases, they reshape the bottom of wide and deep cavities originally due to solution processes of the carbonate (karstic origin). In the lower part of the platforms (B substation) a patchy cover of barnacles affects the rock surface; where the rock is exposed it shows extensive thick, shallow honeycombs.The platform surface at site CP looks fresh and displays few weathering microforms; these are present only in a restricted belt developed immediately at the back of its outer edge. 
In this narrow portion of the surface, few shallow rock pools, flutings and honeycombs appear.In all the investigated platforms the spatial distribution of littoral fauna and flora shows a pattern of superimposed parallel belts. Three biological zones can be recognised that are correlated with bioerosive and/or bioconstructive actions. They comply with the general scheme of vertical zonation for limestone shorelines of the western Mediterranean, as proposed by , starting from the international scheme proposed by and used around the Mediterranean basin.a sublittoral zone ranging from mean sea level down to the seaward cliff bottom (about − 10 m a.s.l.), densely populated by brown algae with scattered sea urchins;a midlittoral zone, submerged at close intervals by waves and, to a lesser degree, by tides, is characterised by important erosive agents. A typical vertical succession of biota can be observed that includes, from sea level upwards, a narrow belt dominated by grazing organisms such as limpets (Patella spp.), followed by a well-developed barnacle belt (Chthamalus spp.) and a continuous Cyanobacteria belt up to the reach of sea spray; these populations are indicative of marked bioerosion;a supralittoral zone, where the biomass is progressively less dense with increasing elevation and mainly represented by the endolithic Cyanobacteria highlighted by the dark colour of the rock surface () and by Verrucaria, a tar-like patina lichen typical of the upper portion of the supralittoral. In this zone abiological weathering agents are prominent.The width of the supralittoral zone, a belt dominated by erosive and destructive actions, is controlled by the exposure of the coast to storms and incoming waves. As already mentioned above, the upper limit of the Cyanobacteria, and also Verrucaria, rises up along the shore at the same rate as the zone splashed by the surf (). Also the positions and extents of barnacle belts (Chthamalus spp.) on the shore are generally controlled by, besides the tidal range, the grade of exposure of the coast to the incoming waves, which, in a highly microtidal environment such as that of the studied sites, can be considered the main factor responsible for the vertical distribution of the barnacles. The more the coast is exposed to waves, the wider is the distribution of Chthamalus spp. along the vertical gradient of the shore, as stated by for the area of the Gulf of Genoa just west of the Gulf of La Spezia (), so that Chthamalus can be considered typical of the lower part of the supralittoralThis situation is common to the entire study area but differences in the distribution of bioerosive agents are reported for each investigated site. They can partly be considered responsible for differences in measured rebound values. In the detail at the TEL site barnacles are concentrated in holes of the lower portion of the shore platform surface associated with cyanophytes and black lichens that cover the platform up to the B substation upper limit. At SS, barnacles form a well-defined belt and rise up-shore mainly in the most exposed tract of the headland. Lichens and cyanophytes patinas are in spots along the shore within the first 10–20 cm above the barnacle belt. At CP, the barnacle belt is present mainly on the seaward cliff face and the darker portions of the platform with cyanophytes and lichens are restricted, not exceeding 1–1.5 m inland from the shore platform outer edge.The rock in the upper part of platforms is harder than in the lower ones. 
Indeed the A and B substations, already distinguishable because the different colour of their rock surfaces, can also be differentiated by comparing the rebound values. In all cases higher R values are returned from upper substations than from lower ones (), and at all stations, except for CP and SS3, these differences are statistically significant (The differences in R values can be related to the highlighted change in the type and occurrence of weathering morphologies that, in the lower part, are more frequent, more intense and supported by the effects of bioerosion. In particular, at both stations of site TEL, honeycombs and small tafoni affect only the lower substations, where the rock is also colonised by bioerosive organisms. At site SS, in the B substation honeycombs affect the rock in the gaps between the resistant cover of barnacles.It is possible to calculate the reduction in strength caused by weathering in the lower parts of the platforms with respect to rock strength in the upper parts (see the last column of ). In all stations, except SS3 and CP, reduction in R is moderate but not negligible. It is beyond the scope of this paper to discriminate the relative contribution of physical and biological weathering processes to overall reduction in strength. Moreover the different cover in organisms and in the occurrence of weathering microforms highlighted between the upper and lower part of platforms account for the evidence that both factors play a role in reducing rock strength.In all but two cases (SS3 and CP) statistical differences in strength values were found between parts of the same platform that were weathered differently. A detailed examination of these two cases supports the general conclusions. The closeness of R mean values for the two substations of site CP is consistent with the moderate difference in the degree and occurrence of weathering forms and biological colonisation between the higher and lower parts of the rock surface. This fact may be interpreted as being due to the sheltered position of the CP rock surface, located in the innermost portion of a bay, which limits the effects of wetting and drying and bioerosive processes very close to the high tide level. Nevertheless, the prolonged permanence of the rock surface beneath the cold climate stratified scree, roughly from 40 000 yr BP (MIS 3) () to 6000 yr BP, when the sea surface then approached the present highstand and started to remove the deposit, could be accounted for. In fact, the fairly recent exposure of the rock surface to erosive agents could have caused the delay in the development of different degrees of weathering forms between the two substations with respect to the other considered rock surfaces.Also station SS3 has similar R values in the A and B substations. This is due to the remarkable coincidence of the rock platform with the plane of the detachment fault; the rock is stressed and strained and therefore much more prone to erosion than in the other stations of the same site. This fact could have caused the homogeneous response of the rock to erosive processes, irrespective of the development of the A and B substations, producing very close R values for both (). 
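The statistical comparison of rebound values between upper (A) and lower (B) substations described above can be illustrated with a short, self-contained sketch. The R datasets below are hypothetical placeholders rather than the measured values, and a Welch (unequal-variance) t-test is assumed, whereas the original analysis may have used the pooled-variance form; the percentage reduction corresponds to the last column of the table mentioned in the text.

```python
# Hedged sketch: two-sample comparison of Schmidt hammer rebound (R) values
# between the upper (A) and lower (B) substations of one station.
import numpy as np
from scipy import stats

r_upper = np.array([48.0, 51.0, 47.5, 50.0, 49.0, 52.0])  # substation A (hypothetical)
r_lower = np.array([42.0, 44.5, 41.0, 43.0, 45.0, 40.5])  # substation B (hypothetical)

# Welch's t-test (no equal-variance assumption, reasonable for scattered field data)
t_stat, p_value = stats.ttest_ind(r_upper, r_lower, equal_var=False)

# Reduction in strength of the lower substation relative to the upper one
reduction_pct = 100.0 * (r_upper.mean() - r_lower.mean()) / r_upper.mean()

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print(f"mean R: A = {r_upper.mean():.1f}, B = {r_lower.mean():.1f}, "
      f"reduction = {reduction_pct:.1f}%")
if p_value < 0.05:
    print("R values of the two substations differ significantly (alpha = 0.05)")
else:
    print("No significant difference (as found at SS3 and CP)")
```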
Furthermore, in the central part of the cove, a wide submerged platform is developed that shelters the sub-aerial platform from waves, causing a limited development of the B substation, the extent of which is, moreover, not clearly detectable.The comparison among study sites highlights the different roles of lithology and weathering in assessing the rock strength as returned by Schmidt hammer values.In the upper part of the platforms where weathering is moderate, lithology plays a dominant role in influencing rock strength. In fact, the t-test applied to average values from all sampling points in all upper substations of each site (), displaying all the same degree of weathering, shows that R values of the upper part of the TEL platform are statistically different from those of CP-A, whereas values of upper SS, tested with those of CP-A, do not appear statistically different. CP and SS are formed in the same bedrock (dolomite), whereas TEL is not (limestone). This demonstrates that, in the shore platforms of the study site when the degree of weathering is similar, the factor controlling rock strength is lithology.All sampling points in TEL-B and SS-B were tested with those in CP-B (). R values differentiate SS from CP, despite the fact that they are both modelled in the dolomites, but not TEL from CP which have a different bedrock type. This means that in the lower part of the tested platforms, weathering has greater weight than the bedrock type in determining rebound values.The statistical analysis had not differentiated TEL-B and CP-B, that have similar rock strength in the lower part of the platform, notwithstanding that they are modelled in the limestone and dolomite, respectively. Indeed they are modelled in rocks characterised by different strengths (the rock strength of limestone is higher than that of dolomite, as testified by the statistical analysis performed on rebound values for the A substation of these platforms, both moderately weathered), but weathering has occurred to a greater degree at TEL-B and to a minor degree at CP-B, so that the initial differences in strength have become irrelevant and the two platforms have reached statistically similar rock strength values.The different degree of weathering observed at TEL-B and CP-B could be explained in terms of differential exposure histories. In fact both TEL and CP are relict shore platforms, inherited from the last interglacial () thus they should have been exposed to weathering agents for the same timespan, but CP has been sheltered by scree slope deposits until sea level approached its present highstand, so that it had a much shorter exposure history.A longer exposure history combined with a higher original rock strength (TEL) and a shorter exposure history combined with a lower original rock strength result in TEL and CP not being differentiated in terms of rock strength only in the lower part of the platforms, i.e. in the active zone of physical and biological weathering.SS and CP, on the contrary, are not differentiated in terms of rock strength in their upper part because they are both modelled in a dolomite bedrock, but in their lower part the differences in weathering can be observed on their surface. interpreted SS as a structurally controlled shore platform; it is impossible, in this case, to infer the age of this landform and then to speculate on its exposure history. We need therefore to rely on morphologies indicative of weathering that were observed, i.e. 
on the density of weathering microforms and of biological organisms, to state that it has, in its lower part, a greater degree of weathering than CP-B.It is remarkable that the differentiation we made a priori between the lower and upper part of each platform, based on visual difference in rock surface colour, does indeed represent a partition of the rock of our shore platforms in terms also of the processes acting on them and their efficiency. It is possible to relate preliminarily the weathering conditions of each part of the platforms (upper and lower) to specific parameters. The upper part of the platform, in fact, is less (and differently) weathered. Typical weathering forms are potholes, indicating that sub-aerial weathering processes prevail on it. Repeated inspections suggest that it is frequently affected by sea spray, less often by sea splash and seldom, in case of heavy storms, by wave surf. These events are aperiodic, being dependent on weather conditions. In the upper part of the platform, rock surfaces are therefore subjected to episodic wetting and drying. The presence of some terrestrial vegetation and lichens in patches should also play a role but we did not investigate this, assuming it to be of secondary importance.In the lower part of the platform, instead, pitted surfaces with honeycombs are widespread. Sea spray and splash are not unusual on the platform whenever the sea is not very calm. Furthermore recorded wave heights in the area suggest that the shore platforms closer to sea level are frequently in the surf zone. In fact the combined effect of astronomical and meteorological tides moves sea level upwards aperiodically so that it frequently reaches the platform seaward outer edge even when waves are moderately high. The lower part of the platform being densely populated by barnacles and other endolithic and epilithic organisms, it is likely that the effect of bioerosion plays an important role in reducing rock strength.Future research should be addressed to investigate the relative importance of different processes operating on these platforms and to increase the number of case studies in Liguria and beyond. In fact it could provide a discrimination of those processes (e.g. salt weathering, slaking, and bioerosion) and a quantification of the parameters that govern their occurrence and degree (e.g. number of wetting and drying cycles, frequency of sea spray, and so on). Micro-scale studies of biota combined with Schmidt hammer testing of rock surfaces could provide quantification of the effect of bioerosion in order to compare it to the degree of physical weathering. Moreover a wider number of shore platforms of these types can be found along the coast of Liguria as well as of neighbouring Toscana; study cases should be selected that could provide a wider insight into a type of shore platform that has seldom been brought to the attention of the wider rock coast community.In the Gulf of La Spezia, as already reported for other parts of Mediterranean Sea, narrow, gently seaward-sloping or sub-horizontal carbonate shore platforms are present, ranging in altitude from the high tide level up to 5 m a.s.l.The rock strength of these shore platforms proved to be significantly influenced by lithology, as revealed by Schmidt hammer test, in their less weathered part. Lithology as a controlling factor becomes less significant as rock weathering increases, to the extent that rocks of two different initial strengths have statistically similar R values. 
In fact differences in strength values were found between rocks of the same type which have been exposed to marine processes for different lengths of time. On the other hand in the active zone of physical weathering and bioerosion, weathering can occur to such a degree that initial lithological differences have little bearing on rock strength.Our result is somewhat contrary to that obtained by on the shore platforms of Galicia, who highlight lower rock strength in the upper rather than the lower strip of the platform. In this latter case, however, both strips are constrained in the intertidal zone; in the lower strip, abrasion occurs and is responsible for weathered rock removal, so that the fresh rock crops out closest to the sea. In the study sites of this work no measuring point was located below the spring tide level, because the tidal range is so small that it is normally lower than the lowest elevation of the investigated platforms. Moreover, the steep gradient of the sea bottom just below the platform's outer edge limits the efficacy of littoral drift as an abrading agent, constraining its action inside potholes.A closer similarity results instead between our data and the findings of , in the sense that these papers stress the role of geological properties in driving erosion processes on shore platforms.According to the results of this work, experimental values of rock strength in the study area depend both on the rock type and on physical weathering due to frequently repeated wetting and drying and bioerosion. The point is how the atypical morphology of our platforms can be explained in terms of processes responsible for their shaping. The classification by , that we took as a startpoint, differentiated C and D platform types in terms of their genesis, being in the first case the platform morphology forced by geological factors (structure) and in the second case due to the current evolution of an inherited landform. The results of this work highlight that the same processes are currently shaping both type C and D platforms. Thus, it seems correct to infer that their morphology depends from genetic factors (i.e. their original morphology), as suggested by The morphology of the La Spezia Gulf platforms is quite different from that of typical shore platforms studied in many parts of the world (coastlines of North Atlantic and Pacific Oceans); in fact it is difficult to identify them completely with either A or B type of or with typical high tide shore platforms. On the contrary they are similar to those described in Mediterranean calcareous coastal areas (). Our results demonstrate that the Schmidt rock test hammer can be used also in the case of narrow Mediterranean shore platforms to investigate the factors controlling their shape and the processes operating on them. Further work should investigate factors responsible for morphological differences between Mediterranean and open-ocean shore platforms with a comparative approach. Differences in lithology and tidal range are probably the most important of these factors but attention should be paid also to wave energy and inheritance.Thermal expansion and thermal mismatch stress relaxation behaviors of SiC whisker reinforced aluminum compositeThe thermal expansion behaviors of SiC whisker reinforced pure aluminum composite cooled down from 580 °C with lower and higher cooling rates were studied in the present research. The results indicated that the thermal expansion behaviors of the composite were affected by the cooling rate greatly. 
The results of transmission electron microscope observation and X-ray diffraction analysis indicated that the dislocation density and thermal mismatch stress (TMS) in slowly cooled composite were lower than those in fast cooled (water quenched) composite. The analysis suggested that the coefficient of thermal expansion (CTE) was closely related to the change rate of TMS and the dislocation density of matrix of the composite. The changing tendencies of TMS in composites with different microstructures and TMSes on heating were also analyzed.Silicon carbide whisker reinforced aluminum composite (SiCw/Al) exhibits significant improvement in physical and mechanical properties compared with unreinforced aluminum alloys Many researches have been done for the CTE of particulate and long fiber reinforced aluminum composites In the present research, the microstructure and TMS of SiCw/Al composite were changed by different heat treatments, and their effects on the thermal expansion behaviors of the composite were analyzed.The SiCw/Al composite used was fabricated by squeeze casting technique with the whisker volume fraction of 18%, and the matrix was pure aluminum. The microstructure observed in a scanning electron microscope (SEM; Hitachi JEOL S750) showed that the distribution of SiC whisker in the composite was random as shown in Before CTE measurements, the specimens were treated with two techniques, one was annealed at 580 °C for 2 h then cooled to room temperature with a lower cooling rate of 1 °C/min, the other was annealed at 580 °C for 2 h then quenched with water at room temperature. The former and the latter are referred as slowly cooled and quenched composites in the paper. The microstructures of the composites after different heat treatments were investigated by transmission electron microscope (TEM; Philips CM-12) with the operating voltage of 120 kV. The specimens for TEM observation were thinned by an ion milling facility equipped with a cooling stage in order to keep the specimen temperature approaching to room temperature during ion milling. The residual TMSes in the composites subject to different heat treatments were estimated by X-ray diffraction (XRD) method using thin plate specimens mounted on a Philips X’Pert diffractometer
mm for CTE measurement were cut from the cylindrical composite ingot. is a schematic diagram showing the location of specimens within the ingot. The CTE curves were measured using a Netzsch DIL402C Dilatometer with the heating rate of 2.5 and 8 °C/min, respectively.The TEM images of microstructures of the composite subjected to different heat treatments are shown in . All images were taken with the operation vector of . It can be found that the dislocation densities and states in the two specimens are quite different. In the matrix of slowly cooled specimen, the dislocation density is relatively low (). However, the highly densified dislocations can be seen in the matrix of quenched specimen as shown in , and most of the dislocations are tangled. It can be suggested that the strength of the matrix in the quenched specimen is higher than that in the slowly cooled specimen due to the dislocation strengthening.Because of the large difference between the CTEs of SiC whisker and aluminum, the TMS is introduced in the composite on cooling from 580 °C. At higher temperature, the yield strength of matrix is very low, which results in the relaxation of TMS and the plastic deformation of matrix. The plastic deformation may cause the dislocation generation in the matrix of the composite. Otherwise, if the cooling rate is enough slow, the recovery process of the matrix can also take place, which leads to the annihilation of dislocation. Therefore, the dislocation state in the matrix is controlled by the competition between the dislocation generation and annihilation processes. When the composite is cooled down from higher temperature with a lower cooling rate (e.g. 1 °C/min), the recovery process can take place and the dislocation annihilation rate may be greater than the dislocation generation rate. If the composite is cooled down from higher temperature with a higher cooling rate (e.g. quenching), the dislocation annihilation takes place very difficulty, which suggests that the dislocation generation rate is higher than the annihilation rate. On the basis of above analysis, it can be concluded that the dislocation density in the slowly cooled specimen is lower than that in the quenched specimen.. It can be found that the TMS in the slowly cooled specimen is lower than that in the quenched specimen. The TMS in the matrix of composites is tensile, which is in agreement with the results of other researches When the yield strength of matrix of the composite is relatively low, the TMS resulting from a large difference in CTE between SiC whisker and aluminum may be relaxed via plastic deformation of the matrix at higher temperature on cooling. Only when the yield strength of matrix is higher than TMS at a certain temperature, Tc, the TMS can be kept below the temperature of Tc on cooling. It is obvious that the lower the yield strength of matrix, the lower the Tc, and the lower the residual TMS at room temperature. When the composite of the matrix is quenched from high temperature, the dislocations in the matrix can not be recovered enough, and the higher yield strength of the matrix is obtained. For slowly cooled composite, the dislocations in the matrix are recovered at higher temperature on cooling due to lower cooling rate, which results in the lower yield strength of the matrix. 
Therefore, the TMS in the quenched composite is higher than that in the slowly cooled composite. The relative elongation and the CTE are defined as εT = (L − L0)/L0 and α = dεT/dT, where εT is the relative elongation, L0 the original length of the specimen, L the length of the specimen at temperature T and α the CTE. The relative elongation and CTE of pure aluminum as functions of temperature are given in . The CTE of pure aluminum increases slightly with increasing temperature once the temperature exceeds about 100 and 150 °C for heating rates of 2.5 and 8 °C/min, respectively, and the CTE measured at the higher heating rate is greater than that measured at the lower heating rate. The relationship between CTE and T for pure aluminum is approximately linear above 100 °C for the heating rate of 2.5 °C/min and above 150 °C for the heating rate of 8 °C/min, and can be written as αm = A + B·T, where αm is the CTE of pure aluminum and A and B are constants that depend on the heating rate of the CTE measurement: A is about 25×10−6/°C for 2.5 °C/min and 24×10−6/°C for 8 °C/min, while B is about 0.64×10−8/°C² for 2.5 °C/min and 1.3×10−8/°C² for 8 °C/min. The curves of relative elongation and CTE versus temperature for the quenched composite are shown in . The CTE curve of the quenched composite is quite different from that of pure aluminum. When the heating rate is 2.5 °C/min, the CTE versus temperature curve has a small minimum (denoted by an arrow) at about 110 °C and a maximum peak at about 245 °C; when the heating rate is 8 °C/min, only the maximum peak can be found. As observed for the pure aluminum specimen, the CTE of the quenched composite at a heating rate of 8 °C/min is greater than that at 2.5 °C/min. The curves of relative elongation and CTE versus temperature for the slowly cooled composite are shown in . An obvious feature of the slowly cooled composite is that no clear maximum peak appears in its CTE versus temperature curves. When the heating rate is 2.5 °C/min, a very small minimum (denoted by an arrow) exists in the CTE versus temperature curve of the slowly cooled composite, but it is weaker than that of the quenched composite. Apart from this weak minimum at the 2.5 °C/min heating rate, the CTEs of the slowly cooled composite increase with increasing temperature for both heating rates, and the CTE at 8 °C/min is again greater than that at 2.5 °C/min. These results show that the CTE of the SiCw/Al composite is very sensitive to the thermal route; the obvious difference in thermal expansion behavior between the quenched and slowly cooled composites suggests that the CTE of the SiCw/Al composite can be controlled by heat treatment.
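The dilatometry post-processing defined above can be summarised in a short numerical sketch. Only the definitions εT = (L − L0)/L0 and α = dεT/dT and the constants A and B for the 2.5 °C/min heating rate are taken from the text; the temperature–length trace, the specimen length L0 and the use of a finite-difference derivative are illustrative assumptions.

```python
# Hedged sketch of CTE extraction from a dilatometer trace.
import numpy as np

def cte_from_dilatometer(T, L, L0):
    """Relative elongation eps_T = (L - L0)/L0 and CTE alpha = d(eps_T)/dT."""
    eps_T = (L - L0) / L0
    alpha = np.gradient(eps_T, T)   # numerical derivative with respect to T
    return eps_T, alpha

# Linear CTE law for pure aluminium above ~100 °C at 2.5 °C/min (A, B from the text)
A, B = 25e-6, 0.64e-8               # 1/°C and 1/°C^2

def alpha_m(T):
    return A + B * T

# Hypothetical dilatometer trace, for illustration only
T = np.linspace(25.0, 400.0, 150)   # temperature, °C
L0 = 25.0                           # assumed original specimen length, mm
dT = T[1] - T[0]
L = L0 * (1.0 + np.cumsum(alpha_m(T)) * dT)   # synthetic length signal L(T)

eps_T, alpha = cte_from_dilatometer(T, L, L0)
i200 = np.searchsorted(T, 200.0)
print(f"CTE at 200 °C: {alpha[i200]:.2e} /°C (linear fit: {alpha_m(200.0):.2e} /°C)")
```

Applied to the measured curves, the same derivative operation would reproduce the minima and maxima of the composite CTE curves discussed in the following paragraphs.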
In the analysis that follows, σm and σf are the TMSes of the matrix and the whisker in the composite, vm and vf the volumes of matrix and whisker, Km and Kf the bulk moduli of matrix and whisker, βm and βf the bulk CTEs of matrix and whisker, and dvm and dvf the changes of matrix and whisker volume in the temperature interval from T to T+dT in the composite. From , if small quantities are neglected, as proved in , the following equation can be obtained: where Vm and Vf are the volume fractions of matrix and whisker, αm and αf the CTEs of pure aluminum and SiC whisker, and αc the CTE of the composite. Because Km is much smaller than Kf, whether αc is greater or smaller than αm depends on the sign of dσm/dT. As the CTE of the aluminum matrix is much greater than that of the SiC whisker, the TMS in the matrix can change from tensile to compressive on heating during the CTE measurement. Relaxation of the tensile TMS and build-up of the compressive TMS in the matrix both give dσm/dT < 0 and hence αc < αm, whereas relaxation of the compressive TMS gives dσm/dT > 0 and αc > αm. The tensile TMS in the matrix of the SiCw/Al composite can be relaxed by the following processes. First, when the composite is heated, the expansion of the matrix is greater than that of the SiC whisker, so the tensile TMS in the matrix is relaxed simply by the difference in coefficients of thermal expansion between the matrix and the reinforcement, without any plastic deformation of the matrix. This relaxation mechanism is named thermal elastic relaxation; with increasing temperature the process continues throughout the CTE measurements. Second, because the yield strength of the matrix decreases with increasing temperature, the TMS may be relaxed by plastic deformation of the matrix. Because the thermal expansion of the matrix is always larger than that of the SiC whisker, the compressive TMS in the matrix can be relaxed only by this thermal plastic relaxation. When the composite is heated from room temperature to a certain temperature during the CTE measurement, the tensile yield strength of the matrix decreases, the TMS may exceed it, and thermal plastic relaxation may take place, which results in an additional decrease of the CTE of the composite. The small minima (shown in ) in the CTE versus temperature curves at the heating rate of 2.5 °C/min for both the quenched and slowly cooled composites correspond to this thermal plastic relaxation of the tensile stress in the matrix. As the stress in the slowly cooled composite is lower than that in the quenched one (as shown in ), the amount of tensile TMS relaxation caused by matrix plastic deformation in the slowly cooled composite is smaller than that in the quenched composite; therefore, the minimum in the CTE curve of the slowly cooled composite is weaker than that of the quenched composite (shown in ). When the heating rate in the CTE measurement is faster (8 °C/min in the present study), plastic deformation cannot develop appreciably at lower temperatures because of the short time spent by the specimen in any given temperature interval; for this reason, no minimum can be seen in the corresponding CTE curves (shown in ). As mentioned above, the TMS in the matrix may change from tensile to compressive with increasing temperature. When the compressive TMS in the matrix exceeds the yield strength of the matrix, it can also be relaxed by plastic deformation of the matrix, which leads to a positive value of dσm/dT and results in an increase of the CTE. For the quenched composite, the dislocation density and yield strength of the matrix are higher, so the compressive TMS in the matrix at elevated temperature may accumulate to quite a high value before the compressive stress relaxes through matrix plastic deformation.
The maximum peak (see ) in CTE curves of the quenched composite suggests that the initially compressive plastic deformation in the matrix takes place very fast, and positive dσm/dT increases sharply. After rapid plastic deformation of the matrix, the compressive TMS decreases more and more slowly, which causes the positive value of dσm/dT decreasing and the maximum peak presenting in the CTE curves for quenched composite. For the slowly cooled composite, the dislocation density and compressive yield strength is lower. The compressive TMS can not be accumulated a high enough value, so the maximum peak disappears in the CTE curve.According to above analysis, because the CTE of the composite is dependent on the relaxation of TMS of SiCw/Al composite, the changing tendency of the TMS in matrix of the composite can be discussed. The changing tendencies of TMS in the quenched and slowly cooled composites are schematically illustrated in , respectively. In the figures, it is assumed that the CTE of SiC whisker is constant within the temperature range of the CTE measurements ), the CTE from points A to D in CTE curve is smaller than αM, which means dσm/dT<0. According to theoretical models such as Turner’s For the slowly cooled composite, the changing tendency of TMS in the matrix of the composite can also be analyzed (as shown in ) using the similar point of view in the above analysis. Because of lower yield strength and dislocation density in slowly cooled composite, the absolute compressive stress in the matrix can not be accumulated to a higher value and no rapid relaxation of internal compressive TMS in the matrix with the mechanism of thermal plastic relaxation takes place. Therefore, no maximum peak in slowly cooled composite can be observed, and the CTE of the composite from point D to E in The dislocation density and TMS of SiCw/Al composite after annealing at 580 °C are affected by cooling rate. The dislocation and TMS in the quenched composite are higher than those in the slowly cooled composite.The thermal expansion behavior of quenched SiCw/Al composite is quite different from that of slowly cooled composite. The CTE versus temperature curve of quenched composite has a minimum vale and a maximum peak, but no maximum peak can be found for slowly cooled composite.A simplified analysis indicated that the CTE of composite depends on changing rate (dσm/dT) of TMS on heating for the CTE experiments; consequently, the microstructure and TMS in the matrix have great effects on the thermal expansion behaviors of SiCw/Al composites.On the basis of the relationship between the CTE and changing rate of TMS, the changing tendency of TMS in SiCw/Al composite can be obtained.As the TMS in whiskers and matrix must be balance, that is to say:Let the volume of composite be v, then v=vm+vf, and dv=dvm+dvf. , where βc is the CTE of composite, the following equation can be obtained from above equation:where Vm and Vf are the volume fractions of matrix and whisker. It is obvious that dVm+dVf=0.Using the relationship of βc=3αc for randomly reinforced composite, Hydrogen diffusivity in B-doped and B-free ordered Ni3Fe alloysThe hydrogen diffusion coefficient of the ordered Ni3Fe–B alloys with and without boron additions was measured by a method of the cathodical precharging with hydrogen. The apparent hydrogen diffusion coefficient decreases with increasing the boron concentration doped in the ordered Ni3Fe alloy. 
Comparing with the B-free ordered Ni3Fe alloy, the activation energy of hydrogen diffusion for the ordered B-doped Ni3Fe alloy increases by as high as 42% when the boron content is sufficient. The doping boron in the Ni3Fe alloy is effective in reducing the hydrogen diffusion at the grain boundary.There is a disorder-order transformation at 500 °C for L12 type intermetallic compound Ni3Fe. Because there are no active elements, the Ni3Fe alloy does not exhibit environment embrittlement induced by water vapor at ambient temperatures. Usually, a fully disordered intermetallic Ni3Fe is not susceptible to hydrogen embrittlement, while an ordered intermetallic Ni3Fe is susceptible to embrittlement in gaseous hydrogen The materials Ni3Fe (the nominal compositions Ni-24 at.%Fe undoped and doped with 0.005, 0.01, 0.02 and 0.04 wt.% boron) were melted in argon atmosphere using raw materials of high-purity Ni, Fe and Fe-19.7 wt.%B interalloy. The alloy ingot was hot forged and hot rolled at 1050 °C to a plate of 2-mm thickness. After solution treatment at 850 °C, the plate was further cold rolled to sheets with a thickness of about 1 mm. Tensile specimens with a gage section of 10 × 2 × 0.9 mm were electric-discharge machined from the sheets. The disordered samples were obtained firstly by annealing cold rolled sheets at 800 °C for 2 h, quenching in air. For ordering treatment, the disordered specimens were annealed at 470 °C for 200 h in evacuated quartz tubes, followed by furnace cooling. The degree of order of the Ni3Fe alloy was 0.40 after this ordering treatment Tensile specimens were mechanically polished to remove the surface oxide layer. The sheet specimens were cathodically precharged with hydrogen at 15 °C, 30 °C or 45 °C for 5 h, respectively. The method of the cathodically precharged with hydrogen is described in article is a typical SEM fractographs of the ordered Ni3Fe specimens precharging hydrogen for 5 h at different temperatures. The fracture mode of the H-charged specimens shows a change from the surface to the center, predominantly intergranular (IG) near the surface and transgranular (TG) in the interior ( shows the SEM fractographs of the ordered Ni3Fe-0.04 wt.%B specimens precharging hydrogen for 5 h at different temperatures. The fractographs of the ordered Ni3Fe-0.005 wt.%B, 0.01 wt.%B and 0.02 wt.%B specimens at the same precharging conditions is similar to that of the ordered Ni3Fe-0.04 wt.%B specimens (). The difference is only the depth of IG fracture of these alloys. From , it can be seen that the depth of the IG fracture increases with increasing precharging temperature for the same composition of the ordered Ni3Fe alloy. The depth of IG fracture of the boron-free ordered Ni3Fe samples is larger than that of the ordered Ni3Fe-0.04 wt.%B samples at the same precharging temperature. It is known that there is hydrogen content profile from surface to center of the samples after hydrogen precharged. The hydrogen content is the highest on the surface and the lowest in the center for the precharged samples. According to the hydrogen content profile and the changes of fracture mode, there exists a critical hydrogen concentration for the ordered Ni3Fe alloy, above which the fracture mode of the B-free and B-doped ordered Ni3Fe alloys changes from transgranular to intergranular, similar to Ni3Al alloys The depth of IG fracture was measured in situ SEM by averaging twenty measurements from both sides on each specimen. 
shows the average depth of IG fracture measured for the ordered Ni3Fe alloy with various boron concentrations at different temperatures. It can be seen from this table that the depth of IG fracture decreases with the increment of boron content doped in the ordered Ni3Fe alloys at the same precharging temperature. The apparent hydrogen diffusion coefficient (DA) of the ordered Ni3Fe–B alloy may be calculated by the depth of IG fracture and the time lag method . The hydrogen diffusion coefficient is direct proportional to the square of the depth of IG fracture in the time lag method. Therefore, it can be seen from that the change of the apparent hydrogen diffusion coefficient of the ordered Ni3Fe alloy is consistent with change of the depth of IG fracture when changing the precharging temperature or boron concentration doping. The results of demonstrate that boron doped in the Ni3Fe alloy has a strong effect of reducing the hydrogen diffusion along the grain boundaries.The relationship between the apparent hydrogen diffusion coefficient (DA) of the ordered Ni3Fe alloy doped with various boron concentrations and reciprocal absolution temperature is shown in . There is a good linear relationship between lnDA and T−1 for the ordered Ni3Fe–B alloy. From , the activation energy of hydrogen diffusion (Q) can be estimated for the ordered Ni3Fe–B alloys and is also listed in . There exist two activation energies of hydrogen diffusion for the ordered Ni3Fe–B alloys when the boron concentration changes from zero to 0.04 wt.% in The ordered Ni3Fe alloy, just like other ordered intermetallics, shows the environmental embrittlement in gaseous hydrogen at room temperature. In our previous work ), it can be see that the effect of boron on the hydrogen diffusion for the ordered Ni3Fe alloy is similar to that in Ni3Al alloys Similar to the ordered Ni3Al–B alloy, our study has indicated that boron has a tendency to segregate to the grain boundary of the ordered Ni3Fe–B alloy ) show that boron doping in the ordered Ni3Fe alloy can effectively reduce the hydrogen diffusion coefficient. The activation energy of hydrogen diffusion doesn’t keep constant in the ordered Ni3Fe-B alloy when the boron content doping increases from zero to 0.04 wt.% in . There exist two activation energies of hydrogen diffusion for B-doped ordered Ni3Fe alloy. One activation energy of hydrogen diffusion is 20.8 kJ/mol when the level of boron doping is less than or equal to 0.005 wt.%; another is 29.6 kJ/mol when the level of boron doping is greater than or equal to 0.01 wt.%. The activation energy of hydrogen diffusion for the ordered Ni3Fe–B (B ≥ 0.01 wt.%) alloy increases by as high as 42%, as compared with the ordered B-free Ni3Fe or Ni3Fe-0.005 wt.%B alloy (see ). This result shows that the activation energy of hydrogen diffusion depends on the occupancy of boron atoms in the grain boundaries of Ni3Fe alloy. The changes of the activation energy of hydrogen diffusion in the ordered Ni3Fe–B alloy indicate that there is the critical boron content, which is about 0.01 wt.%. The mechanism of hydrogen diffusion in the ordered Ni3Fe alloy changes when the boron content doping is above the critical boron content. This result indicates that the beneficial effect depends on the boron concentration.Chung and his group had studied the trapping effect and the ductilization effect of boron on hydrogen embrittlement in Ni3Al alloy . 
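The estimation of the activation energy reported above follows an Arrhenius treatment of the apparent diffusion coefficient, and can be illustrated as follows. The D_A values in the sketch are hypothetical placeholders; only the procedure, a linear fit of ln D_A against T⁻¹ assuming D_A = D0·exp(−Q/RT), reflects the text, which also notes that D_A is proportional to the square of the intergranular fracture depth for a given charging time.

```python
# Hedged sketch: activation energy from the temperature dependence of D_A.
import numpy as np

R_GAS = 8.314  # J/(mol K)

def arrhenius_activation_energy(T_celsius, D_apparent):
    """Return (Q in kJ/mol, D0) from a linear fit of ln(D_A) versus 1/T."""
    T_kelvin = np.asarray(T_celsius) + 273.15
    slope, intercept = np.polyfit(1.0 / T_kelvin, np.log(D_apparent), 1)
    Q = -slope * R_GAS / 1000.0   # kJ/mol, since slope = -Q/R
    D0 = np.exp(intercept)
    return Q, D0

# Precharging temperatures used in the experiments; D_A values are illustrative only
T_c = [15.0, 30.0, 45.0]           # °C
D_A = [2.0e-14, 3.4e-14, 5.5e-14]  # m^2/s (hypothetical)

Q, D0 = arrhenius_activation_energy(T_c, D_A)
print(f"Q = {Q:.1f} kJ/mol, D0 = {D0:.2e} m^2/s")
```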
It is interesting to point out that atomic ordering in Ni3Fe also has a marked effect on surface chemical reactivity The present results clearly show that hydrogen precharging results in intergranular fracture near the surface and transgranular in the interior of the ordered Ni3Fe samples with and without boron additions. The apparent hydrogen diffusion coefficient decreases with increment of the boron content doping in the ordered Ni3Fe–B alloy at the same precharging temperature. The apparent hydrogen diffusion coefficient of the ordered B-free Ni3Fe alloy is about 3–4 times larger than that of the ordered Ni3Fe-0.04 wt.% B alloy. There exist two activation energies of hydrogen diffusion for the ordered Ni3Fe–B alloy when the boron concentration changes from zero to 0.04 wt.%. There is the critical boron content (about 0.01 wt.%) for the ordered Ni3Fe–B alloy, above which the boron atoms can effectively trap the hydrogen atoms on the grain boundary and lower the rate of hydrogen diffusion. All the results agree well with the segregation of boron to Ni3Fe grain boundaries and the trapping of hydrogen atoms by boron atoms, as evidenced recently by Auger analyses.Edge impact modeling on stiffened composite structuresFinite Element Analysis of low velocity/low energy edge impact has been carried out on carbon fiber reinforced plastic structure. Edge impact experimental results were then compared to the numerical “Discrete Ply Model” in order to simulate the edge impact damage. This edge impact model is inspired to out-of-plan impact model on a laminate plate with addition of new friction and crushing behaviors. From a qualitative and quantitative point of view, this edge impact model reveals a relatively good experiment/model agreement concerning force–time and force–displacement curves, damage morphology or permanent indentation after impact. In particular the correlation is faithful concerning the results of the parameters retained by industry; the maximum crack length on the edge and the permanent indentation.Finally, it can be noticed that the model quickly answers in crushing mode and goes in an inadequate way from the dynamic behavior to the quasi-static behavior. In order to correct this problem it seems necessary to implement a strain rate effect in the behavior law on the fiber failure in compression. The next step is to apply this model to the compression after impact.Aeronautics integrates many composite structures. Unfortunately, during a manufacturing operation, these structures could be significantly damaged by a foreign object and at the same time the damage occurring might remain undetected by visual inspection a). They are extremely loaded and are designed to resist buckling to keep the structure safe, but if a tool drops on the stringer edge during the plane’s maintenance, its residual properties can be drastically reduced Nowadays, structural stiffeners are mostly used for protection against edge impacts, which needs improvement as additional weight, and is a major concern in aircraft industry. Therefore, it is important to study in detail the edge impact phenomenon and to define the damage scenario, in order to identify the parameters that affect the residual strength after impact. 
By the way, it will be possible to improve the stringer’s impact damage tolerance.The proof of the impact resistance, depending upon the impact damage detectability, has to be made in order to certify these structures for aeronautical industry, which is the concept of damage tolerance Dent depth and crack length drive the current edge impact detectability threshold criterion for aeronautic fields (b). When the impact indentation is smaller than the barely visible impact damage (BVID) the structure has to support the extreme loads that it is subjected to. However, if the damage is detectable, i.e. when the impact indentation is bigger than the BVID, another criterion must be considered, such as sustain limit loads, repair or change the structure Composite skin impact issues, and the damage mechanism However, if the focus is shifted from skin to edge, then there seems that the damage tolerance knowledge is missing. As far as the author is concerned, only two researches have been conducted in this regard The typical load–displacement curve of composite laminate under progressive crushing is shown in b). The first one is known as the splaying mode (a) in which bundles of bending delaminated lamina splay on both sides of a main crack, and the broken fibers and resins trapped at the crushing zone can lead to the formation of debris wedge on the surface of the crushing platen. The second one is called the fragmentation mode (c) in which the plies sustain multiple short length fractures due to pure compression, transverse shearing and sharp bending, which lead to the formation of small fragments in the crush zone. These two failure modes are also observed during the edge impact test The aim of this paper is to define an edge impact modeling in order to compare its results with experiments where a vertical drop-weight testing device has been used to perform the edge impacts on different stacking laminates. Precise microscopic examination and X-ray analysis have also been done to closely visualize the damage scenario First of all, a test specimen has been fabricated to perform preliminary understanding of the phenomenon, which is a representative of the current needs identified above. T700/M21 UD carbon prepreg has been selected, which is a well-known aircraft material The following two different stacking sequences have been studied:Stacking A: [902, −452, 04, 452, 02]S, 6 mm-thick for 24 pliesStacking B: [452, 02, −452, 04, 902]S, 6 mm-thick for 24 plies.The present work follows a previous experimental study, and in order to help the reader understanding of this paper, the main conclusions of the edge impact experimental study If fibers are oriented in the impact direction, then kink-bands () are created (dynamic and static loading).In case of the dynamic test, regardless of the energy level (10, 20 or 35 J) and stacking sequence, a specific crushing plateau phenomenon appears. This crushing plateau can be modeled multiplying an average crushing stress of 250 MPa by an average projected area of impact Spi
≈ 25 mm2 (). In this case, it can be said that the matrix properties control the crushing plateau.Stacking sequence has a relatively small influence on the impact damage, which can be due to the fact that for each stacking presented in this paper For the dynamic impact, irrespective of the energy level and stacking sequence, the force–displacement curves have similar initial stiffness. This initial dynamic force can be evaluated by multiplying the contact surface of each fiber orientation by the fiber compressive failure strength. Therefore, it can be concluded that the fiber properties control the initial impact stiffness (In the quasi-static indentation case, the material is directly crushed. The initial static force can be evaluated by multiplying an average crushing stress of 250 MPa by the projected theoretical surface of the impactor, during the initial phases of the indentation experiment. So, the properties of the matrix control the initial indentation stiffness (There is no equivalence between static/dynamic edge impacts (). During static edge impact, the impactor shape quickly destabilizes the fibers and leads to the development of kink-bands and a crushing phenomenon (The first peak in the indentation force curve is equal to the crushing plateau force value of 6250 N. Furthermore, the behavior after the impactor displacement of 0.5 mm is more difficult to explain. It can be assumed that there is a partial increase of the surface crushing; however, the authors have not verified this hypothesis.). Nevertheless in order to simulate loadings such as edge impact or crushing, the crushing process should be taken into account. Indeed such tests induce high compressive loading leading to crushing process and to high compressive strains in the transverse and longitudinal directions. In the next paragraph, the “Discrete Ply Model” (DPM) will be presented and the crushing modeling will be particularly focused. Firstly the behavior in the transverse direction, i.e. the matrix cracking and the transverse crushing, will be presented. Afterwards the delamination modeling will be briefly reminded; interface elements with cohesive zone Matrix cracking is taken into account in the DPM using interface element normal to the transverse direction (). The onset of damage of these interface elements is based on Hashin’s theory where σt, τlt and τtz are respectively the transverse stress, the shear stress in the (l,
t) plane and the shear stress in the (t,
z) plane, evaluated in the neighboring volume elements, σtf,t the transverse failure stress in tension and τltf the failure shear stress. This criterion is assessed at each time increment; the interface stiffness is set to zero if the criterion is reached, i.e. the matrix cracks, and the interface remains intact otherwise. The initial stiffness of the interface element is chosen very high, typically 10^6
MPa/mm.As previously mentioned, the transverse crushing must be taken into account. Israr et al. ) and its value is similar to the compressive matrix failure stress. Then a crushing plateau is applied in transverse direction () in order to represent, at the same time, the compressive matrix failure and the crushing in this direction.Moreover, to simulate the plastic deformation εtpl due to transverse crushing, a plastic behavior is imposed using a yield function ft:where σtcrush is the crushing stress and the transverse stress σt is evaluated using:σt=Hlt·(1-df)·(εl-εlpl)+Htt·(1-df)·(εt-εtpl)+Htz·(1-df)·(εz-εzpl)where Hlt, Htt and Htz are the elasticity stiffness, εl (εz) the longitudinal (out-of-plane) strain, df the damage in the fiber direction and εlpl,εtpl,εzpl, the plastic strain respectively in the l, t and z-directions. The fiber damage df and the plastic longitudinal strain εlpl will be explain in the next paragraph, and the plastic out-of-plane strain εzpl is due to the expansion in the z-direction due to crushing in the transverse direction considering a constant volume:This relation makes possible the coupling between the transverse and out-of-plane directions during a t-crushing. In parallel, it would be necessary to consider a crushing in the z-direction and to take into account the coupling between these 2 crushings. This work has not been done so far because the z-crushing is not present in the edge impact. Moreover the coupling between the transverse crushing and the fiber crushing is not considered due to the lack of data and because it is not considered of first importance. The fiber crushing will be explained in the next paragraph and will allow determining of εlpl.In fact the coupling between plastic strains in t and z-directions (Eq. ) could lead to high positive strain in the out-of-plane direction and to increasing of the crushing stress σtcrush. But the results of Israr et al. where σtf,c is the transverse failure stress in compression and λzpl the plastic elongation in the z-direction. This elongation λzpl makes it possible to take into account the variation of the element size according to z and represents the increase (if it is higher than 1) of plastic size, that is to say the z-size increase corresponding to the t-crushing. This increase is due to remaining debris; therefore it cannot physically increase the crushing force. This expansion is obtained by integrating the plastic deformation according to z:In the initial position, λzpl is obviously equal to 1 and can only increase because εtpl is negative due to crushing and then εzpl is positive due to the coupling (Eq. ).Moreover in order to limit the excessive expansion due to crushing process, λzpl is averaged on each volume element and is limited at a maximum value:λzpl=averagei=1,8λzpl(i)λzpl=minλzpl,λmaxwhere λzpl(i) is the plastic out-of-plane elongation of the ith integration point, considering the volume elements are C3D8 with 8 integration points, and λmax is the maximum plastic out-of-plane elongation taken equal to 2 in this study. In the same way, in order to avoid excessive distortion of volume elements the plastic strains are limited to a minimum value εmin. To do that, the crushing stress σtcrush is increased with exponential function versus plastic strain:Ifεtpl<εminthenσtcrush=σtf,cλzpl·exp-k·εtpl-εminwhere k is taken high enough, usually equal to 2, and εmin is usually taken equal to −1.6, that is to say to approximately 80% of the initial height. 
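A minimal sketch of the transverse crushing rule described above is given below for a single integration point and a strain-driven update. Several points are assumptions rather than the authors' formulation, since the published expressions are only partially legible here: the crushing stress is taken as σtf,c/λzpl, the constant-volume coupling is written as dεzpl = −dεtpl, and the return to the plateau uses a simple elastic-predictor correction with the transverse stiffness Htt; the averaging of λzpl over the eight integration points of the C3D8 element is omitted.

```python
# Minimal sketch (not the authors' implementation) of the transverse crushing law.
import math

K_EXP = 2.0        # exponent k of the stiffening ramp (value quoted in the text)
EPS_MIN = -1.6     # minimum admissible plastic strain (~80% of the initial height)
LAMBDA_MAX = 2.0   # cap on the plastic out-of-plane elongation lambda_z_pl

def crushing_stress(sigma_tf_c, lam_z_pl, eps_t_pl):
    """Current crushing stress, including the exponential ramp below EPS_MIN."""
    sigma_crush = sigma_tf_c / lam_z_pl          # assumed form of the plateau stress
    if eps_t_pl < EPS_MIN:                       # limit excessive element distortion
        sigma_crush *= math.exp(-K_EXP * (eps_t_pl - EPS_MIN))
    return sigma_crush

def transverse_crushing_update(sigma_t_trial, sigma_tf_c, state):
    """Return the admissible transverse stress and update the plastic state dict.

    state carries eps_t_pl, eps_z_pl, lam_z_pl and the elastic stiffness H_tt.
    """
    eps_t_pl = state["eps_t_pl"]
    eps_z_pl = state["eps_z_pl"]
    lam_z_pl = state["lam_z_pl"]

    sigma_crush = crushing_stress(sigma_tf_c, lam_z_pl, eps_t_pl)
    # Crushing is active when the compressive trial stress exceeds the plateau
    f_t = -sigma_t_trial - sigma_crush
    if f_t > 0.0:
        d_eps_pl = -f_t / state["H_tt"]          # compressive plastic increment
        eps_t_pl += d_eps_pl
        eps_z_pl -= d_eps_pl                     # constant-volume coupling t -> z
        lam_z_pl = min(lam_z_pl * math.exp(-d_eps_pl), LAMBDA_MAX)
        sigma_t = -crushing_stress(sigma_tf_c, lam_z_pl, eps_t_pl)
    else:
        sigma_t = sigma_t_trial                  # purely elastic step

    state.update(eps_t_pl=eps_t_pl, eps_z_pl=eps_z_pl, lam_z_pl=lam_z_pl)
    return sigma_t
```

In the actual model this update would be evaluated at each of the eight integration points of the C3D8 elements, with λzpl then averaged over the element and capped at λmax as described above.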
Of course, in order to simulate higher crushing process, it should be necessary to remove volume element. But in the present case, this phenomenon is not considered of first importance and is not taken into account.Then permanent indentation is managed in the matrix cracking elements. Indeed, a large part of the permanent indentation seems to come from remaining debris in cracks oriented at 45° in the ply thickness Permanent indentation plays a crucial role in the detection of the damage and aeronautical certification damage tolerance policy Delamination damage consists with important cracks between plies. It is typically modeled with cohesive interface elements based on fracture mechanics ). These delamination interface elements are written in mixed fracture mode (mode I, II, III) to simulate the energy dissipated by delamination. Moreover the shearing (II) and tearing (III) fracture modes are combined and mode II is supposed equal to mode III. And in order to represent the overlap of 2 consecutive plies, the 0° and 90° plies are meshed with square elements and the 45° and −45° plies are meshed with diamond-shaped elements where GIc, GIIc and GIIIc represent the critical energy release rate (ERR) of delamination in mode I, II and III, respectively. Then, thanks to energy dissipation of fracture mechanics, the delamination criteria presents a classical behavior of the cohesive zones with a linear propagation of the stress function of the displacement The fiber failure plays a great role on the impact damages and on the crushing (Therefore, to be able to produce the critical ERR due to fiber fracture per unit area of crack where σl is the longitudinal stress, V the volume of the element, S the section normal to the fiber direction l, εl the total strain degradation of the fiber stiffness and GIf the critical ERR in opening mode in the fiber direction (). Then the fiber stiffness is classically degraded by using a damage variable df. This damage variable is conventionally calculated compared to the longitudinal strain with the aim to obtain a linear decrease of the longitudinal stress:where ε0T (ε0C) is the starting strain degradation of the fiber stiffness in tension (compression) and ε1T (ε1C) the total strain degradation of the fiber stiffness in tension (compression).Obviously the critical ERR is different under tension or compression loading b) and kink-bands are observed under compression loading (where GIf,t (GIf,c) is the critical ERR in tension (compression). These equations (Eqs. ) makes it possible to determine εlT and εlC and the Eq. makes it possible to evaluate the fiber damage df. Then the longitudinal stress in tension is calculated as:Ifεl>0thenσl=Hllt·(1-df)·εl+Hlt·(1-df)·(εt-εtpl)+Hlz·(1-df)·(εz-εzpl)where Hllt is the elasticity stiffness in the longitudinal direction in tension which is different of this one in compression Hllc. If the longitudinal strain is negative, the problem is more complex because it is necessary to distinguish the fiber behavior before and after crushing. Before the crushing the stress is evaluated similarly to the tension case:σl=Hllc·(1-df)·εl+Hlt·(1-df)·(εt-εtpl)+Hlz·(1-df)·(εz-εzpl)But when the crushing starts, i.e. 
when the strain εl reaches the crushing strain εlcrush for the first time, a plastic longitudinal strain εlpl is added and the stress is evaluated as:σl=Hllc·εl-εlpl+Hlt·(1-df)·εt-εtpl+Hlz·(1-df)·εz-εzplThe problem of this formulation is to induce a discontinuity of the plastic strain εlpl at the moment of the crushing starting: the plastic strain is null before crushing and reaches the crushing strain εlcrush when the crushing starts. This point will have to be focused and a solution could be to manage the fiber damage in compression using a plastic strain εlpl in spite of the damage parameter df. This work is in progress.Moreover as the crushing process in longitudinal direction is supposed independent of the matrix crushing process (in t and z-directions), the crushing stress σlcrush is supposed constant and equal to the matrix failure stress:It means in particular that the mean crushing stress is supposed the same in longitudinal and in transverse direction, as shown by Israr et al. Finally the plastic strain εlpl can be determined using a yield function fl and the corresponding crushing stress:And contrary to the transverse direction, no coupling is considered between the longitudinal plastic strain and the transverse and out-of-plane plastic strains. This means the different crushing processes in the fiber direction and in the (t,
z) plane are supposed totally independent. Of course this hypothesis should be confirmed by specific experimental tests and this assumption must be questioned if necessary.It could be also noticed that with this approach the other fiber failure modes (II and III) are not taken into account because data are missing and they are judged of secondary importance The last step to achieve to correctly simulate the crushing is to transmit the crushing information between consecutive elements in the longitudinal direction. In the same way than Israr et al. ) to the two neighboring elements (or with the neighbor element if it is an edge element). Indeed once the crushing process is initiated, the neighboring elements cannot reach any more the failure compression stress according to longitudinal direction, nor to dissipate GIf,c.In the same way than the transverse crushing, in order to avoid excessive distortion of volume elements, the longitudinal plastic strain is limited to a minimum value εmin. To do that, the crushing stress σlcrush is increased with exponential function versus plastic strain:Ifεlpl<εminthenσtcrush=σtf,c·exp-k·εlpl-εminwhere k is taken high enough, usually equal to 2, and εmin is usually taken equal to −1.6, that is to say to approximately 80% of the initial height.This model based on DPM makes it possible to take into account the loss of stiffness of the specimen due to the impact damage and the delaminated surfaces shape In particular the integration of the crushing, which is done in this paper, is the first step to generalize the DPM to severe solicitations. Of course this work is only a preliminary job and some hypotheses should be detailed and confirmed, or abandoned if it proves to be wrong. In particular these points should be focused and discussed:The out-of-plane crushing is not simulated because it is considered of second importance for the edge impact.The transverse crushing is supposed to create expansion only in the out-of-plane direction.Only the first mode of fiber failure is taken into account.The coupling between crushing in fiber direction and crushing in the plane normal to the longitudinal direction is neglected. In particular the expansion in the transverse and out-of-plane directions due to longitudinal crushing is not taken into account and vice versa.The compressive fiber failure is simulated using a damage parameter and the corresponding crushing by a plastic model; these two approaches could be mixed.An important point of the edge impact is the friction between the impactor and the specimen. Indeed the friction influences the opening of the composite plate during the edge impact and then should be studied. An experimental test has been carried out in order to measure the friction coefficient under conditions representative of the edge impact test Some studies on the friction between steel and composite were already carried out in the literature ) was carried out on friction between a machining tool and a laminate carbon The major result of this study lies in the relatively low friction coefficient observed during the experiment (b) from approximately 0.1. In order to benchmark this value, a friction test was carried out on a 100 kN electro-mechanics Instron machine (The experimental set-up consists with a composite specimen glued on a support translational with the frame. 
An important point of the edge impact is the friction between the impactor and the specimen. Indeed, the friction influences the opening of the composite plate during the edge impact and should therefore be studied. An experimental test has been carried out in order to measure the friction coefficient under conditions representative of the edge impact test. Some studies on the friction between steel and composite materials have already been reported in the literature; in particular, a study was carried out on the friction between a machining tool and a carbon laminate. The major result of that study lies in the relatively low friction coefficient, of approximately 0.1, observed during the experiment. In order to benchmark this value, a friction test was carried out on a 100 kN electro-mechanical Instron machine. The experimental set-up consists of a composite specimen glued on a support free to translate relative to the frame. A normal force is imposed on the composite specimen using a 16 mm-diameter spherical impactor, and the force necessary to move the support is measured. A UD plate was manufactured with all the plies oriented in the same direction, [0]₃₀, that is to say a 7.5 mm thickness for 30 plies of T700/M21 carbon UD prepreg. Five specimens were then cut out with five fiber orientations: [0]₃₀, [45]₃₀, [60]₃₀, [80]₃₀ and [90]₃₀. This makes it possible to study the impactor/specimen friction according to the fiber orientation. Finally, the specimen dimensions are 150 mm in length and 30 mm in height, including 5 mm outside the boundary conditions. The tests are carried out dry, i.e. without oil or grease. A normal force N is applied to the specimen using the 16 mm-diameter spherical impactor of the edge impact test. A guide with a bearing is positioned between the specimen and the frame to leave the specimen free to translate. The tangential force T is then increased until slip is reached, which makes it possible to obtain the friction coefficient f = T/N. The normal force versus tangential force curves show a similar behavior whatever the orientation of the UD specimen, with a higher value for the 60° and 90° specimens. Due to the low number of experimental tests, it is difficult to draw conclusions on the effect of the fiber orientation on the friction coefficient, and it is therefore assumed constant. In conclusion, a friction coefficient of 0.06 is evaluated whatever the fiber orientation; this value will be used in the FE model. This value, although low, is in relatively good agreement with the study of Mondelin et al.
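As a simple numerical illustration of how the friction coefficient can be extracted from such a test, the sketch below fits T = f·N through the origin by least squares. The force values are invented for demonstration (chosen so that f ≈ 0.06); the measured data are those reported in the paper's figures.

```python
import numpy as np

# Illustrative (made-up) normal and tangential forces at the onset of slip, in kN.
N = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
T = np.array([0.031, 0.058, 0.124, 0.239, 0.482])

# Least-squares fit of T = f * N constrained to pass through the origin.
f = np.sum(N * T) / np.sum(N * N)
print(f"friction coefficient f ≈ {f:.3f}")
```

A fit through the origin is used here because, by definition, the tangential force at slip vanishes with the normal force; an unconstrained linear fit would attribute part of the experimental scatter to a spurious offset.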
The objective of this section is to test, from a qualitative and quantitative point of view, the behavior law proposed in the previous sections and to compare its results to the edge impact experiments carried out previously. The bottom part of the plate is clamped and the initial velocity of the 16 mm-diameter, 2.368 kg impactor is imposed to obtain the desired impact energy level. The volume element size is fixed at 1 mm in length and 1 mm in width to obtain a good representation of the plate while avoiding an excessively long calculation time. One volume element is used for each ply, or more precisely for each group of plies in the same direction, that is to say 9 ply groups for stacking A [90₂, −45₂, 0₄, 45₂, 0₄, 45₂, 0₄, −45₂, 90₂] and for stacking B [45₂, 0₂, −45₂, 0₄, 90₄, 0₄, −45₂, 0₂, 45₂]. An explicit dynamic analysis has been carried out. The cases of stackings A and B impacted at energy levels of 10, 20 and 35 J are presented. After the validation of this edge impact model, the model could be applied directly to compression-after-impact modeling; this is the next step of this work.

A visual experiment/model comparison of the damage can be carried out. In particular, the cut sections just under the impactor give a first idea of the damage obtained numerically. It can be noticed in this figure that the edge impact model predicts the delamination of all the interfaces, as observed in the experiment. The cut sections of stacking A (a) show that the interfaces 90°/−45° (1) and −45°/0° (2) are delaminated in both the experiment and the model. Failure of the offset 0° (3) and center 0° (4) plies is also qualitatively well simulated. The permanent indentation under the impactor also seems qualitatively well reproduced. In the case of stacking B (b), the interfaces 45°/0° (1) are delaminated in an asymmetrical way in the experimental specimen, whereas the model delamination is symmetrical. Delamination of the interfaces −45°/0° (2) also seems numerically underestimated, even if the experimentally obtained delamination is asymmetrical, which could explain the discrepancy. Finally, the failure of the center 90° (3) plies, presenting kink-bands in the experiment, is well modeled, as can be seen where the numerically obtained fiber damage is plotted.

The model thus seems to reproduce in an adequate way the fiber failure, matrix cracking and delamination observed during the experiment. From a quantitative point of view, the first step is the study of the force–time curves. These curves show an acceptable correlation in terms of total impact time and force fall; the phenomenon is thus more or less well restored in time for the two stacking sequences. It can also be noticed that the maximum force is systematically underestimated for the two stacking sequences. The model of stacking B does not seem to present a force plateau (following the maximum force) such as observed in the experiments, whatever the impact energy level, contrary to stacking A, which presents a force plateau similar to the experiment. The second step consists of the study of the force–displacement curves. For stacking A (a–c), the force increases gradually and reaches a maximum. The force then falls and reaches a plateau of approximately 6 kN whatever the impact energy level. The displacement direction of the impactor is finally reversed, the force falls and a permanent indentation remains. The force–displacement curve of the model is thus in relatively good agreement with the experiment, even if the force peak is underestimated. For stacking B (d–f), the force increases gradually and reaches a maximum without reaching a crushing plateau. At maximum displacement, there is a sharp fall of the force and a permanent indentation remains. The force–displacement curve of the model is overall in poor agreement with the experiment. This discrepancy can be explained by the excessive out-of-plane swelling obtained numerically; indeed, the swelling is overestimated by a factor of about two (stacking A at 20 J). It is interesting to superimpose the force–displacement curves of the model with those of the edge impact and edge indentation (corresponding to quasi-static loading) experiments. It can be noticed that the model quickly responds in crushing mode and seems to pass in an inadequate way from the dynamic behavior to the quasi-static behavior. In order to correct this problem, it seems necessary to implement a strain rate effect in the behavior law of the fiber failure in compression. It will then be necessary in the future to take into account more accurately the transition between the crushing process and fiber failure, and in particular to account for the strain rate effect in the behavior law of the fiber failure. The evolution of the maximum damage depth can then be drawn according to the impact energy and stacking sequence. The projected delaminated area presents a good agreement between experiment and modeling for stacking A, whereas the modeling of stacking B underestimates this delaminated area by 55% on average. Nevertheless, the damage shape seems faithfully simulated for stacking A. Finally, a relatively good experiment/modeling agreement is revealed concerning the parameters retained by industry, such as the maximum crack length on the edge.
Once again, the higher the impact energy, the longer the crack. It is interesting to perform a sensitivity study of the friction coefficient using the model, and in particular to evaluate the accuracy of the measured value of 0.06. Indeed, this very low friction coefficient is close to the value of 0.1 commonly used for a contact between two lubricated surfaces (steel–iron), whereas a value closer to 0.3, commonly used for two dry surfaces (steel–cast iron type), could be expected. We thus propose to compare these three values with the edge impact model of stacking sequences A and B. The force–displacement curves clearly make it possible to identify the influence of the friction parameter. It is noticed that the friction coefficient acts particularly on two results of the edge impact modeling: the permanent indentation and the maximum force. When the friction coefficient increases, the permanent indentation decreases and the maximum force increases. It is interesting and reassuring to observe that the model with a friction coefficient of 0.06 presents the best experiment–model correlation.

This paper presents the major modifications provided to the DPM. From a qualitative and quantitative point of view:
- The edge impact model predicts the delamination of all the interfaces, as observed in the experimental study.
- The kink-bands observed during the experiment are relatively well modeled.
- The model seems to reproduce in an adequate way the fiber failure, matrix cracking and delamination observed during the experiment.
- From a quantitative point of view, the force–time curves show a relatively good correlation in terms of total impact time and force fall; the phenomenon is thus well restored in time for the two stacking sequences.
- The maximum force is systematically underestimated for the two stacking sequences. It could then be necessary in the future to take into account more accurately the transition between the crushing process and fiber failure, and in particular to account for the strain rate effect in the behavior law of the fiber failure.
- The projected delaminated area presents a good agreement between experiment and modeling for stacking A, whereas for stacking B the model underestimates this delaminated area by 55% on average. Nevertheless, the damage shape seems faithfully simulated for stackings A and B.
- Finally, a relatively good experiment/model agreement is revealed concerning the parameters retained by industry: the maximum crack length on the edge and the permanent indentation. Once again, the higher the impact energy, the longer the crack.
- A sensitivity study has been performed to determine the influence of the friction coefficient on the model and to validate the friction coefficient of 0.06. The model with this friction coefficient presents the best experiment–model correlation. The friction phenomenon has an effect on the model results, and in particular on the failure pattern. Part of the discrepancies of the edge impact model could be due to friction effects.
- It can be noticed that the model quickly responds in crushing mode and seems to pass in an inadequate way from the dynamic behavior to the quasi-static behavior. In order to correct this problem, it seems necessary to implement a strain rate effect in the behavior law of the fiber failure in compression.
This work will have to be taken into account in the future. This edge impact model is similar to the out-of-plane impact model on a laminate plate.

Tensile Membrane Action of Lightly-reinforced Rectangular Composite Slabs in Fire

A recently developed method of treating tensile membrane action of lightly reinforced concrete slabs, based on a rigorous treatment of the kinematics of movement of the yield-line mechanism, has been developed to consider composite slabs with unprotected downstand steel beams in fire conditions. The fire case differs from the enhancement of load capacity of slabs at ambient temperature in the respect that the applied loading is kept constant at a predetermined value, but the strength of the downstand beams progressively declines as their temperature rises. It is assumed that the concrete slab does not become hot enough in its active levels, within the duration of a fire, to reduce its strength. This extension to the method is derived systematically. It is seen that the yield line mechanisms of these slabs are aligned differently from those of the equivalent concrete slabs, so it is not valid to use the latter as the basis of a design calculation. The advantage of finite deflection due to tensile membrane action manifests itself as an enhancement of the steel beam temperature that can be sustained, above that at which the yield line mechanism forms. The peak enhancement occurs at the point at which reinforcing mesh begins to fracture progressively along diagonal yield lines. This fracture can be delayed and the peak temperature increased if the mesh ductility across the yield line cracks is increased by reducing the bond between bars and concrete, thus facilitating the bar-slip from the crack-faces. The effects of using meshes of different ductility classes, and both plain and deformed bars, are considered for composite slabs of different aspect ratios.

Nomenclature
- concrete stress block areas on diagonal and central yield lines
- temperature-dependent total tensile capacity of all the downstand steel beams
- concrete resultant force across a diagonal yield line
- concrete x- or y-direction resultant forces on central yield line
- steel mesh strengths per unit width in x and y directions
- general reinforcing bar stress and bar stress crossing yield line crack
- elastic and plastic bond development lengths
- moment of beam forces about y-aligned edge
- moments including C and S about y-aligned or x-aligned edge respectively
- moments of Cx2, Cy2 about y-aligned or x-aligned edge respectively
- total external moment about y-aligned or x-aligned edge respectively
- total internal moment about y-aligned or x-aligned edge respectively
- moments of Tx1, Tx2 about y-aligned edge
- moments of Ty1, Ty2 about x-aligned edge
- moment of V about appropriate edge (depends on facet considered)
- number of unprotected downstand beams across the y-dimension of the slab
- number of unprotected downstand beams crossing a diagonal yield line
- number of unprotected downstand beams crossing central yield line
- value 1 if unprotected downstand beam on centre line of slab, 0 otherwise
- dimensionless coordinates of yield line intersection in x- and y-aligned mechanisms
- resultant horizontal shear force along a diagonal yield line
- tensile resultant forces in x-aligned mesh
- tensile resultant forces in y-aligned mesh
- movements of a point on a crack-face in x and y directions
- resultant vertical shear force across a diagonal yield line
- coordinate system: parallel and perpendicular to downstand beams, and through depth of slab
- coordinates of concrete stress-block centroids on diagonal and central yield lines
- limiting x coordinate of unbroken y-direction reinforcement
- x coordinate at which y reinforcement emerges from compressive stress block
- limiting y coordinate of unbroken x-direction reinforcement
- y coordinate at which x reinforcement emerges from compressive stress block
- depths of concrete stress block at slab corner and yield line intersection
- reinforcing bar strains: general, ultimate, yield
- x and y movements of facets at corner of slab
- limiting x and y crack widths at which reinforcement fractures
- dimensionless limiting crack widths ηx = Δlim,x/l and ηy = Δlim,y/l
- mesh depth as a proportion of slab thickness
- dimensionless strength ratios λx = fpx/(fc·l) and λy = fpy/(fc·l)
- dimensionless stress block depths ψ1 = z1/l, ψ2 = z2/l

For reinforced concrete slabs, it is usually justifiable to base the analysis of TMA on an initial optimal yield-line mechanism [], which remains unchanged as the loading, and consequently the slab deflection, increases. This is especially the case for lightly reinforced slabs, for which the yield-line hinges involve essentially discrete cracks through the concrete thickness; once the concrete has fractured, the plastic moment capacity at a hinge is much lower than the fracture moment of the uncracked section. The early work on TMA, based on the optimal yield-line mechanism, was carried out during the 1960s and early 1970s []; this work was described in an earlier paper [] by the lead author, and will not be revisited in detail here. The work made virtually no impact on the structural design of concrete slabs, because the large deflections involved violate all normal serviceability limit state criteria in structural codes of practice.

Tensile membrane action has returned to research attention during the past two decades because of the need to design structures to avoid disproportionate collapse under hazard loadings of different types. In such cases, large deflections become unimportant if collapse is avoided. The most usual scenario involving local damage to a framed structure because of an explosive device or impact is loss of a single ground-floor column, requiring its vertical load to be redistributed through beams and slabs to adjacent columns. Since this increases at least some spans very considerably, large deflections are inevitable and TMA is one mechanism that can be utilized in certain damage scenarios to prevent collapse. In fire scenarios, TMA has been seen to prevent collapse of composite, rather than reinforced concrete, floors when the strength of their downstand steel secondary beams has been highly degraded at high temperature. The prime illustration of the effectiveness of TMA in preventing collapse of composite floors in intense fire conditions was in the full-scale fire tests at Cardington [], which gave rise to a fire engineering design strategy that can be used to optimize the usage of passive protection on downstand beams. In this strategy the necessary vertical support condition is established by protecting the primary and secondary beams on the column gridlines around the edges of a slab panel while leaving its internal secondary beams unprotected. This strategy is illustrated in (a); it can be seen that there is continuity across the protected beams except around the building perimeter.
However, it is usually considered unsafe to make use of this continuity in design, because it is very likely that reinforcement will fracture over support beams due to high hogging moments; it is also impossible to guarantee that the slab will remain composite with the protected beams when this happens. Hence a rational design model is the single-panel one illustrated in (b), which uses the transverse support provided by the edge beams, but ignores its in-plane and rotational restraint.] the first author of this paper introduced a large-deflection treatment of the behaviour of thin, lightly reinforced concrete slabs after an initial yield-line mechanism has been formed at a certain load intensity. This rigid-plastic bending mechanism has infinitesimal deflection, and the optimal shape of the mechanism effectively fixes itself for the larger displacements, which cause TMA. This is true because the yield-line cracks for such lightly reinforced slabs are discrete, and the net bending strength of the slab away from a yield line is much higher than that at the yield line itself. The main emphases of this method are:To compute the in-plane movements of the facets from horizontal equilibrium of the resultant forces acting on the slab facets. These forces derive from the strength of the concrete areas of the contact zones and the net plastic tensile forces of the unbroken reinforcing mesh across the yield lines.To allow the orthogonal mesh bars in each direction to fracture at the point where the crack-width resolved into the appropriate direction reaches a given value.To establish transverse equilibrium by equilibrating the transverse forces and external load at high deflections.The fact that mesh can fracture across yield lines creates a natural limit to the enhancement of yield-line capacity that can be generated. The key parameter in making TMA effective is then the ductility of rebar between the faces of a discrete crack; higher ductility gives higher enhancement of capacity.The effects of attaching downstand beams to a concrete slab panel are twofold. Firstly, the bending section becomes composite, with the steel section carrying tensile stress, and an effective width of the concrete slab acting solely in compression; this gives a greatly increased moment capacity compared with those of the two elements involved. Secondly, in terms of the combined areas of steel in either slab direction, it effectively makes the panel highly orthotropic, except at very high steel temperatures when its contribution becomes very small. The degree of orthotropy of a slab is one of the key parameters that determine the geometry of its optimum yield-line mechanism, including its alignment, and for a composite slab this depends on the critical steel temperature at which the mechanism forms. This change of yield-line mechanism is illustrated in Prior to the formation of the yield lines at the critical steel temperature, neither the alignment of the mechanism nor the details of its geometry can be known, and so both possible alignments have to be investigated. The alignments will each be denoted in terms of the directions of their middle yield line. 
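Before setting out the detailed equilibrium of the two mechanism alignments, it is worth noting the overall structure of the fire-case calculation that these emphases imply: since the applied load is held constant while the beam strength degrades, the method asks, at each imposed deflection, what beam temperature brings the large-deflection capacity down to the applied load. The sketch below illustrates that outer loop only; `load_capacity` is a crude, invented surrogate standing in for the in-plane and transverse equilibrium calculations developed in the remainder of the paper, and all numerical values are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Invented surrogate for the large-deflection load capacity p(theta, T):
# the capacity falls as the beam temperature T rises and grows with the facet
# rotation theta through tensile membrane action.
def load_capacity(theta, T, p_ambient=8.0):
    steel_retention = np.clip((1200.0 - T) / 800.0, 0.0, 1.0)  # 1 at 400 C, 0 at 1200 C
    enhancement = 1.0 + 1.5 * theta / 0.05                     # illustrative TMA gain
    return p_ambient * steel_retention * enhancement

applied_load = 5.36  # kN/m2, held constant throughout the fire

# Increment the deflection (facet rotation) from zero; at each step iterate the
# beam temperature until the capacity matches the applied load.
for theta in np.linspace(0.005, 0.05, 10):
    residual = lambda T: load_capacity(theta, T) - applied_load
    if residual(400.0) > 0.0 > residual(1200.0):
        T_crit = brentq(residual, 400.0, 1200.0)
        print(f"theta = {theta:.3f} rad -> sustainable beam temperature ~ {T_crit:.0f} C")
```

Bracketing the root between 400 °C and 1200 °C mirrors the physical bounds discussed later: below the lower bound the beam retains its full yield strength, and at the upper bound it has none.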
The essential details, including the force resultants along the yield lines, needed to establish equilibrium of the x- and y-aligned mechanisms are shown in In these figures the compressive forces C, Cx2 and Cy2 are the resultant forces from the concrete stress blocks on the yield lines (half-length yield lines in the cases of Cx2 and Cy2), assuming that the concrete acts with a uniform compressive strength. The tensile forces Tx1, Tx2, Ty1 and Ty2 are the resultants from the reinforcing mesh at yield, over the lengths within which it is unfractured and in tension. Shear forces S are present along the diagonal yield lines. There are nb identical downstand steel beams aligned in the x-direction, uniformly spaced across the y-dimension of the slab at a spacing l / (nb + 1). These are assumed to be unprotected and to be at the same temperature at any time, so that they each have a tensile force at the temperature-dependent yield strength of the structural steel. The total of these beam forces is denoted as BT, so each individual beam force is denoted as BT/nb. It is assumed that the edges of the panel are vertically supported but do not have steel downstand beams attached.A pair of equilibrium equations can be generated for either type of yield-line mechanism by resolving the forces for Facet 1 in the y-direction and for Facet 2 in the x-direction. For the x-aligned mechanism this gives:These forces need to be related to the kinematics of the displacements and rotations of the two plate facets at any transverse displacement.A position along a diagonal yield line, located at zero deflection by x and y coordinates which are compatible with the angle γ, and by a depth z from the top-surface of the slab, separates at finite deflection into a point on the crack-surface of Facet 1, which can move by v in the y-direction and a point on the crack-surface of Facet 2 which moves by u in the x-direction. These movements are shown in , and apply to either x- or y-aligned mechanisms.The equation of the neutral axis on each diagonal yield line is given by setting either u = 0 in Eq. , to indicate the position at which the facets cease to intersect. This gives the straight line:The z-coordinates of the ends of the neutral axis, at the corner of the slab and at the intersection of the yield lines, are denoted as z1 and z2, given by setting x to zero and to the x-coordinate of the intersection, respectively.The central yield line clearly retains the neutral axis depth at the intersection, z2, along its length. Eqs. shows the progressive change of shape of the concrete stress block for an x-aligned mechanism as the deflection of the yield-line mechanism proceeds, with z1 generally increasing and z2 decreasing from their initial equal values. The stress block on the central yield line diminishes as z2 decreases until it vanishes altogether when z2 becomes negative. At this stage, the stress block on each diagonal yield line becomes triangular; under some circumstances, it may eventually become trapezoidal, as shown in (e). Other special events are illustrated in (d). Reinforcing mesh will fracture completely at some value of crack separation (which may differ for the two bar directions), and this fractured zone will change progressively as the deflection increases. A similar sequence also occurs for y-aligned mechanisms.It is assumed that x- and y-direction reinforcing bars fracture at crack separations Δlim, x and Δlim, y respectively, at the levels of these bars. 
The values of these limiting crack-widths depend on the bars' fracture strain (ductility), by its bond relationship with the surrounding concrete and by the proximity of adjacent anchor-points such as welds to orthogonal bars. Alternatively, they may be based on experimental testing. The coordinates beyond which the bars are fractured can then be defined from the limiting crack-width at the level of the appropriate bars, using Eqs. For reinforcement in the x-direction the limiting coordinate for fracture is:for the x-aligned mechanism, up to a maximum value of l/2, or:for the y-aligned mechanism, up to a maximum value of nyl.For reinforcement in the y-direction the limiting coordinate for fracture is:for the x-aligned mechanism, up to a maximum value of nxl or:for the y-aligned mechanism, up to a maximum value of rl/2.Clearly, when xlim, 1y ≤ 0 or ylim, 1x ≤ 0 then the corresponding bars have completely fractured across the diagonal yield lines.In addition, when the concrete stress block crosses the depth of a layer of reinforcement, the bars that fall within the stress block become inactive. In this case the limiting coordinate at which mesh is no longer in tension is:for the y-aligned mechanism. If the stress block lies above the mesh level then yt, 1 = 0.It is now possible to define the forces which maintain the in-plane equilibrium of the slab facets, given that the mesh is assumed to act at yield over the distances where the bars are intact and in tension, with strengths per unit width of fpx and fpy. Concrete within the stress blocks is assumed to act at its compressive strength fc, with its resultant forces acting at the centroids of the stress blocks, which are shown on . The individual force components shown in The only other component of the in-plane equilibrium equations is the total longitudinal force BT in the downstand steel beams, which is temperature-dependent.Given these relationships, the in-plane equilibrium Eqs. at any specified steel beam temperature are functions of the rotation angle θ and the neutral axis depth z1. At any specified deflection, defined by the angle θ, and steel beam temperature T, the in-plane equilibrium equation can be solved for z1. This must be compatible with the assumptions made with respect to the value of z2 and the widths of slab over which the reinforcing mesh is fractured.Vertical equilibrium, and hence the load capacity of the composite panel at high deflections, can easily be established by equating the external (load-induced) and internal moments within each of the two types of facet about their supported edges. There are equal and opposite resultant transverse shear forces between these facets at each of the diagonal yield lines, which do not affect the horizontal equilibrium equations given in . It is assumed for convenience (although it is irrelevant to the eventual solution) that these transverse shear forces act through the centroid of the concrete stress-block on the diagonal yield line. Because of the inherent symmetry there is no transverse shear force crossing the central yield line. The forces involved in the internal and external moments about the edge axis XX′ for a y-aligned trapezoidal facet are shown in and about YY′ for the corresponding triangular facet in (b). 
The alternative, x-aligned, cases will be seen to be much less usual for practical design details, and are not specifically illustrated in the figures.The equilibrium equation for a y-aligned trapezoidal facet can be aggregated from the individual internal moments, equated to the opposing external moments. The bottom edge of the concrete slab is selected as the level for the axes XX′ and YY′ about which these moments are calculated. The “internal” clockwise moments about YY′ are:MB=BTnB∑i=1ndilytanγynB+1θ+h21−θ22+nmrly2θ+h21−θ22+nmid2rly2θ+h21−θ22MExt=prly22ny2tθ+rly61−θ22+12−nytθ+rly41−θ22For the corresponding triangular facet the “internal” clockwise moments about XX′ are:The internal moment for the triangular facet about XX′ is therefore:The external moment for the triangular facet about XX′ is:The transverse shear force can be eliminated from these equations if Eqs. are each multiplied by the factor (−MV/MV′) and respectively added to Eqs. . The transverse equilibrium equation is then:MExt−MV/MV′MExt′=MTx1+MTx2+MCS+MCx2+MB−MV/MV′MTy1′+MCS′This gives the enhanced value of the applied load for a case in which the steel temperature is kept constant:For an x-aligned mechanism the equivalent equations for the triangular facet about YY′ are:MB=BTnB∑i=1ndilytanγxnB+1θ+h21−θ22+nmid2nxlyθ+h21−θ22The external moment about YY′ for this facet is:For the trapezoidal facet the corresponding internal moment equations about XX′ are:MExt′=ply22r2−nxtϕ+ly41−ϕ22+nx2tϕ+ly61−ϕ22Again, the transverse shear force can be eliminated from these equations if Eqs. are each multiplied by the factor (−MV/MV′) and respectively added to Eqs. . The transverse equilibrium equation is then:MExt−MV/MV′MExt′=MTx1+MCS+MB−MV/MV′MTy1′+MTy2′+MCS′+MCy2′Since the yield strength of structural steel is assumed to degrade with temperature in the piecewise-linear linear manner defined by EN 1993-1-2 [], it is not feasible to express the load capacity directly as a function of beam temperature and slab deflection. The enhanced value of load produced by these equations at any steel beam temperature and slab deflection may be greater or less than the pre-defined load intensity actually applied to the slab. It is easiest to increase the slab deflection in increments from zero; at each deflection the beam temperature is then iterated until the load capacity is within a prescribed tolerance of the defined loading. In this process, the possibility of multiple solutions is avoided. Hence the enhancement of capacity of composite slab panels with unprotected steel beams by tensile membrane action can be defined as an enhancement of the critical beam temperature at a given applied load intensity by finite deflection. There will clearly be boundaries to the range of applied load within which this enhancement applies:At the lower end of applied loading, a non-composite version of the slab can carry the load, and so beam temperatures can rise above 1200 °C.At the upper end, the composite slab carries the load without any reserve capacity at temperatures up to 400 °C. 
Although it is possible that, at finite deflections, there may be sufficient enhancement to allow them to carry their loading at higher temperatures, this is impractical since such cases would not be acceptable under ambient-temperature ultimate limit state conditions.For both types of mechanism there are 30 different possible combinations linking:The concrete stress-block shape, corresponding to:The reinforcement fracture state, given by:Intact or fractured bars crossing the central yield line;Intact or partially fractured (“unzipping”) bars in either direction crossing the diagonal yield lines.Given the high ductility of structural steel members, especially at high temperatures, and the typically rather large spacing of shear studs, the possibility of tensile fracture of the downstand beams is not included in these scenarios. The 30 possible cases for the x-aligned mechanism are shown in , and the comparable cases for the y-aligned mechanism are shown in The process for constructing the in-plane equilibrium equations for each of the 30 cases for both mechanisms has been covered in detail in the previous paper, dealing with non-composite slabs without reference to elevated temperatures. For composite slabs in fire conditions, these equations are identical in all their terms except those representing the temperature-dependent forces in the unprotected downstand steel beams. As in the previous paper, it is convenient to present the equilibrium equations in dimensionless terms, and for this purpose the following parameters are defined:In these terms, the in-plane equilibrium equations for the x-aligned cases defined in . The in-plane equilibrium equations for the y-aligned mechanisms defined in it was pointed out that the degree of orthotropy of a composite slab panel with unprotected downstand steel beams changes as the temperature of the steel beams rises. For a slab reinforced with isotropic mesh, and subject to a fixed loading intensity, at any particular downstand beam temperature the composite slab will have an optimum yield-line pattern which is associated with the lowest possible load capacity. Taking into account the strength of the downstand beams the net degree of orthotropy will be least at high temperatures and greatest at ambient temperature. The initial critical temperature, at which the load capacity intersects with the applied loading, therefore fixes the pattern of yield lines for the subsequent tensile membrane action as the beam temperature increases further and the slab deflection increases.The links between the applied load intensity, the critical yield line temperature and the geometry of the optimal yield line pattern are shown in for four slabs of the same length (9.0 m), depth (130 mm) and isotropic reinforcing mesh (142 mm2/m in either direction). The slabs all have downstand steel beams (UKB305 × 165 × 40 in S275) at 3 m spacing, so that their ambient-temperature design capacity, considered as the conventional array of parallel composite beams, is the same in all cases. The 9 m × 6 m and 9 m × 9 m cases respectively represent the corner and internal slab panels used in the Cardington full-scale tests (Kirby []). The other cases, of 9 m × 12 m and 9 m × 15 m panels illustrate the effects on small-deflection yield-line behaviour of design decisions to remove fire protection from larger areas of floor slab. that as the applied load changes the critical steel beam temperature changes between 400 °C and 1200 °C. 
For extremely low load intensities, the non-composite slab can carry the loading, and so steel temperatures are effectively irrelevant and can rise above the limit of 1200 °C at which zero strength is assumed. At the other extreme, load intensities above the ambient-temperature capacity are unsustainable, even when the beams have lost none of their strength.The geometry of the yield-line mechanisms associated with the initial critical temperatures is defined in . Because of the inherent orthotropy of these composite slabs, the mechanisms are all y-aligned except for a small range of low load intensities for the 9 m × 6 m slab, which has only a single longitudinal downstand beam, and has very high critical temperatures; this is most easily seen in shows the change of ny for the yield-line mechanisms for each of the slab aspect ratios; in the x-aligned region of the 9 m × 6 m slab's curve ny is represented by r4nx, which is the “virtual” ny given by extrapolating a diagonal yield line to rl/2. In the absolute locations of the yield-line intersections are plotted against load intensity. This shows that, for any given load intensity, the distances of the yield-line intersections from the longitudinal edges of the slab are effectively constant.This behaviour can be rationalized as an array of parallel composite beams behaving very similarly to their behaviour without continuity in the transverse direction. Each composite beam forms a mid-span plastic hinge, with small triangular areas of concrete as the only assistance to the load capacity deriving from the two-way support of the composite slab. The exception to this principle is where the applied loading is very low, and failure occurs at such high steel temperatures that the effect of the steel downstands almost vanishes; in this range, the yield-line behaviour mimics that of non-composite concrete slabs.After a yield-line mechanism has formed at the initial critical temperature appropriate to the slab's loading intensity the slab continues to deflect as temperatures rise further. At various stages, the in-plane equilibrium phase changes, due to movement of the neutral axis depths z1 and z2, and due to fracture of rebar in either direction across yield lines. A typical example of these changes of phase as the slab deflection changes is shown in . This shows the change of critical beam temperature for one of the slabs used in the yield-line temperature studies above; a slab based on the Cardington “corner bay” panels, with overall dimensions 9 m × 6 m and A142 mesh of strength 500 MPa at an average effective depth of 38 mm. The slab has an overall depth of 130 mm and has concrete of strength 30 MPa. The single central secondary downstand beam is UKB305 × 165 × 40 of yield strength 275 MPa, and it is assumed that its temperature is uniform across its cross-section. Results for five fracture crack-widths (1 mm, 1.5 mm, 2 mm, 2.5 mm and 3 mm) are shown. The surface loading intensity is 2 kN/m2; for this loading, the yield-line mechanism is y-aligned. The phase changes for the 1 mm case are shown on , and are annotated with the codes from a1y: The original state at infinitesimal deflection, shown graphically in (a) and (b), in which concrete stress-blocks exist over the whole lengths of all yield lines, and no rebar has fractured.b1y: The state in which z2 has become negative, indicating that concrete contact has ceased across the central yield line, and the concrete stress-blocks on the diagonal yield lines are triangular. 
There is still no fractured rebar.b1y′: All the x-direction reinforcement crossing the central yield line fractures abruptly. After this has occurred the enhancement of critical temperature resumes.b1y′*: The y-direction reinforcement crossing the diagonal yield lines begins to fracture at the yield-line intersection, and then progressively “unzips” towards the slab corners as the deflection increases. This causes the critical temperature to decrease, and so the point gives the maximum enhancement at which this phase occurs.b1y***: The x-direction reinforcement crossing the diagonal yield lines begins to “unzip” from the intersection.It can be seen that the “unzipping” of the y-direction reinforcement across the diagonal yield lines subsequently reaches the slab corner so that it is completely broken. At a later stage, the x-direction reinforcement also fractures completely. This does not imply complete collapse, because the downstand steel beam is still in place. The effect of allowing greater ductility can be seen by comparing the annotated curve with those for the higher fracture crack-widths. It can be seen that there is a common enhancement curve until the first reinforcement fracture occurs at a deflection, which varies with the prescribed fracture crack-width. The effect of greater pre-fracture ductility is to amplify the effect of TMA to the extent that, at least for this particular loading case and slab definition, for 5 mm fracture crack-width the temperature of the downstand beam can reach 1200 °C without a de-stabilization of the equilibrium of the slab.Since enhancement of composite slab capacity is now defined, for any constant loading intensity and slab deflection, by the downstand beam temperature that can be sustained, it is appropriate to compare the temperature enhancement variations, which are given with deflection for various load levels. This is done in for slabs of three aspect ratios; 9 m × 6 m, 9 m × 9 m and 9 m × 12 m. These are composed, respectively, of one, two and three identical parallel composite beams of span 9 m and the details defined in above. For each of these slab panels a number of load levels are used. The highest loading is just sustainable by the composite slab at temperatures up to 400 °C, when the steel beam retains its full yield strength. The lowest loading is almost at the level, which can be carried by the concrete slab alone, without any assistance from the steel downstand beams; this is equivalent to the beam temperature reaching 1200 °C, at which the steel has no yield strength. Two collections of curves are shown for each aspect ratio, with fracture crack-widths of 1 mm and 5 mm. It should be noted that the yield-line mechanism is y-aligned for all load levels of the 9 m × 9 m and 9 m × 12 m slabs, and for all load levels above 1.73 kN/m2 for the 9 m × 6 m slab. The contrast between the collections of curves for low ductility (1 mm fracture crack-width) and high ductility (5 mm fracture crack-width) is quite clear. For all aspect ratios, it would be possible to take full advantage of TMA, if sufficient ductility is available in the form of an adequate fracture crack-width.It must, of course, be remembered that it has been assumed in the development of this approach that the reinforcing mesh retains its full strength. For hot-rolled bars, this implies that the mesh temperature stays below 400 °C, or for cold-drawn bars below 300 °C []. 
There are two ways in which mesh can be heated: by conduction through the concrete slab and by radiation penetrating the narrow crack openings of the yield lines.

In the case of conduction, for the slabs considered above, the temperature values given in EN 1992-1-2 Annex A [], or those given in EN 1994-1-2 Annex D [], may be used; these are linked solely to the EN 1991-1-2 [] standard fire exposure. The mesh location in the slab implies that, for a typical composite slab cast on trapezoidal decking, mesh temperatures stay below 300 °C at 60 min and below 400 °C at 90 min according to Eurocode 2 []. The corresponding times according to Eurocode 4 []. The case of radiation up the opening crack remains to be analysed in terms of the heat flux at the mesh level.

In the preceding section, the ductility of the reinforcing mesh has been defined in terms of a fracture crack-width at the level of the mesh. At any location along a yield line the crack-width contains a length of bar which is under uniform stress; in the limiting (fracture) case this stress is at its ultimate value. Within the embedded length of bar the tensile stress must be lower than this at any point at which there is surface bond stress, and so fracture can only occur in the pulled-out length within the crack. Codified representations of the relationship between bond stress and bar slip are not helpful to attempts to find the fracture crack-width, because their focus is on maximizing the bond in reinforced concrete components and ensuring sufficient anchorage, rather than on quantifying slip. Work by Sezen and Setzler [] on concrete column ductility in the vicinity of beam connections under seismic shaking, which again considered bar pull-out at discrete cracks, produced a very simple model of bar slip which was verified against a series of 12 tests conducted by Sezen []. In this method, the stress-strain relationship for the rebar steel is assumed to be bilinear, with a shallow gradient between the yield and ultimate points. The bond stress within the embedded length from the crack-face is assumed to be locally constant, but to take one of two values, depending on whether the bar strain is "low" (elastic) or "high" (post-yield). This seems sensible, since little damage has been done to the concrete by the small bar strains in the elastic zone, whereas the much higher strains in the post-yield zone, and the large resulting slip movements of the bar surface deformations, cause real damage to the adjacent concrete. The model is illustrated in . The given bond stress values of ub = √fc for the elastic zone and ub′ = 0.5√fc for the post-yield zone are only numerically correct in SI units (MPa) and for deformed bars. Given the very simple nature of the distribution of bond stress, the two parts of the development length, ld and ld′, and subsequently the crack-width, can easily be calculated for a bar that has sufficient anchorage from its bond stresses and development lengths: the general bar stress is denoted fb, the bar yield stress fy and the ultimate stress fu; a bar stress between fy and fu is denoted fs. When the crack-width is at the fracture level, then fs = fu and εs = εu, the ultimate bar stress and strain values. In this state, the fracture crack-width is twice the ultimate slip from a single crack-face. For welded orthogonal meshes the weld-points constitute physical "anchors" at regular spacing sb, which can provide the reaction force balancing the tension in the bar at the weld-point.
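The two-zone model translates directly into closed-form development lengths and an end slip. The sketch below evaluates them for a single deformed bar, ignoring for the moment the weld-point anchorage discussed next (unlimited embedment is assumed); the function name, the bar properties and the linear strain integration over each zone are illustrative assumptions rather than the paper's own implementation. The yield and ultimate stresses loosely follow the paper's convention of treating the 500 MPa nominal yield as the ultimate stress.

```python
import math

def fracture_crack_width(db, fc, fy, fu, Es=200e3, eps_u=0.05):
    """Sketch of the two-zone bond-slip model (after Sezen) for a deformed bar.

    db    : bar diameter (mm)
    fc    : concrete compressive strength (MPa)
    fy    : bar yield stress (MPa)
    fu    : bar ultimate stress (MPa)
    Es    : elastic modulus of the bar steel (MPa)
    eps_u : ultimate (fracture) strain of the bar (illustrative value)

    Returns (ld, ld_prime, crack_width) in mm, with no weld-point anchorage.
    """
    ub = math.sqrt(fc)           # elastic-zone bond stress, MPa
    ub_p = 0.5 * math.sqrt(fc)   # post-yield-zone bond stress, MPa
    eps_y = fy / Es

    # Constant bond stress => linear bar-stress build-up over each zone.
    ld = fy * db / (4.0 * ub)             # elastic development length
    ld_p = (fu - fy) * db / (4.0 * ub_p)  # post-yield development length

    # Slip from one crack-face = integral of bar strain over the two zones
    # (triangular strain profile up to yield, trapezoidal beyond yield).
    slip = 0.5 * eps_y * ld + 0.5 * (eps_y + eps_u) * ld_p
    return ld, ld_p, 2.0 * slip           # fracture crack-width = 2 x slip

# Example: 8 mm deformed bar in 30 MPa concrete (illustrative numbers only).
print(fracture_crack_width(db=8.0, fc=30.0, fy=435.0, fu=500.0))
```

With these illustrative numbers the total development length is of the order of 200 mm, comparable to the 200 mm weld spacing of standard meshes, which is why the weld-points can intervene before the full length develops.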
Each weld to the orthogonal bars has a strength which is guaranteed under Eurocode 2 to be at least 25% of the bar strength; if the bar tension at a weld-point exceeds the weld strength then the weld will fracture and the distance to the next weld-point will also become available for bond-slip. In this case, the pullout into the crack will increase abruptly when weld breakage occurs.Some typical bar stress distributions are shown in , in which the first transverse bar weld on one side of the crack is positioned at the average distance of sb/2 from the crack-face, and subsequent welds are at the regular spacing sb. For deformed bars which have very high bond stress it may be possible for the whole of the bar's fracture force to be carried by a development length which does not cross the first weld-point, as shown in . Alternatively, the development length may reach this weld-point at bar fracture, but with insufficient bar stress to break the weld, as shown in (b). In this case, the fracture crack-width is slightly lower than when the weld has broken and the additional anchorage force must be carried by the bond stress beyond the weld-point. For low-bond meshes, such as those composed of plain circular bars, a sequence similar to that shown in (c) can happen: even before bar yield the development length reaches the first weld-point, which subsequently breaks. Further welds may break before the necessary development length is reached to accumulate the bar fracture stress in the crack.Both isotropic (“A” series) and orthotropic (“B” series) welded mesh fabrics are available for use in composite slabs. The fabric dimensions given in BS4483: 2005 [. The isotropic meshes are most commonly used in composite slabs. In common with other types of steel reinforcement, three ductility classes are defined in BS4449: 2005 []; their strength and ductility characteristics are reproduced in . The required development lengths and the slip from one crack-face at fracture, which is half of the fracture crack-width, in the absence of positive anchorage from transverse bar weld-points, can be calculated using Eqs. . If this required development length is pre-empted by a weld-point at which the bar force is insufficient to break the weld, then the actual development length is shortened, and the fracture crack-width is reduced. Since the calculation of fracture crack-width is based on a bilinear bar stress-strain assumption, albeit with a low post-yield stiffness, rather than the rigid-plastic assumption, which underpins the yield-line analysis and enhancement calculations, it is advisable to treat the fracture crack-width calculation in a way that underestimates it. In case studies, therefore, the nominal yield stress of reinforcing bars (500 MPa) is treated as the ultimate stress in the fracture crack-with calculation, and the fu/fy ratios specified in , bringing together the values from BS4449 and Eurocode 2, are maintained. This effectively reduces the yield strength for each ductility class from the nominal value, as shown in The bond characteristics quoted by various researchers are summarized in . The elastic-zone and plastic-zone bond stresses for deformed bars are conveniently defined using the Sezen [] model as having magnitudes in SI units ub=fc and ub′=0.5fc respectively. For plain undeformed bars, less consistent results are available; Cashell et al. [] performed a large number of pull-out and flexural tests on reinforced concrete specimens. 
Although it was acknowledged that a two-zone bond stress-slip model is justifiable, theirs was a single-zone approach with an assumed constant value of bond stress lying between those for the elastic and plastic zones. Although a range of experimental values was found, the very general recommendation for slab design purposes was to use 2 MPa and 1 MPa as single bond strengths for deformed and plain bars respectively, on the basis that deflections at failure would be lower for higher bond stress values. Herraiz and Vogel [], in a study on tensile membrane action modelling, adopted the fib Model Code [] expressions for single average bond stress. For "good bond conditions", these expressions are 5.6(fcm/25)^0.25 for hot-rolled deformed bars and 0.3√fcm for hot-rolled plain bars, giving 5.86 MPa and 1.64 MPa respectively for concrete of strength 30 MPa. Kankam [] tested 30 specimens, albeit with bar diameters larger than those used in composite slabs, and quoted single average bond strengths of 6.8 MPa and 1.3 MPa from tests on deformed and plain bars respectively. Giroldo and Bailey [] tested both deformed and plain bars at various temperatures, but presented the results only in terms of a single experimental bond stress-slip curve for each bar type, diameter and temperature. Observation of these curves suggests that their ambient-temperature tests for deformed bars indicate a single average bond stress in the region of 4.5 MPa for 6 mm bars and 5.5 MPa for 8 mm bars. The ambient-temperature curves for plain bars indicate a single average bond stress (more indicative of ub′ than ub) beyond an initial peak caused by fracturing of a weld anchorage, in the region of 1.3 MPa for 6 mm bars and 2 MPa for 8 mm bars. These studies give some support to the Sezen model for deformed bars. On the basis of the average bond stress results for plain bars from these studies, the apparent values for plain bars are between 20% and 40% of those for deformed bars; for the purpose of these studies the values of the two bond stresses are set at the mean of these: ub = 0.3√fc and ub′ = 0.15√fc.

As has been stated above, the unbalanced bar stress at which welds to transverse bars break is guaranteed to be at least 25% of the bar strength; for these analyses the weld strength is assumed to be 50% of the bar strength. The effect of this as the stress fs is increased is illustrated, in terms of the anchorage stress f0 at the end of the development length and the development length over which bond slip occurs, for A252 meshes (200 mm × 200 mm spacing) composed of 8 mm plain bars of ductility classes B and C. The effect of successive breakage of the welds to the transverse bars at 100 mm and 300 mm from the crack-face can be seen for both ductility classes, with the eventual, natural development length stopping just short of the weld at 500 mm from the crack-face. For deformed meshes of the same type the much greater bond strength reduces the development lengths considerably; it can be seen that for A142 deformed mesh no welds break, and for A252 only the first weld breaks. The effect of bar ductility and bond strength on the fracture crack-widths can also clearly be seen from this table.

Case studies, which once again feature composite slabs comprising parallel composite beams of the specification used for secondary beams in the Cardington full-scale building fire tests, as defined in , were conducted for panels of three aspect ratios (1.5, 1.0 and 0.75).
A single load level of 5.36 kN/m2, the Fire Limit State design load intensity appropriate to the design assumptions for the Cardington building, is used for these studies. As in the previous analyses, the nominal strengths for the concrete and the steel section were used, rather than the higher strengths obtained from materials testing during the Cardington programme. The resulting critical temperatures are plotted against slab deflection in for the three slab aspect ratios and the mesh ductility classes B and C. In general terms it is obvious that the enhancement of peak steel temperature with deflection is highly dependent on the fracture crack-width. It is also clear that high-aspect-ratio panels (with the composite beams spanning in the longer direction) are the most sensitive to this aspect of ductility.The approach outlined in this paper is based on the assumption that an optimum rigid-plastic yield-line pattern is valid for a composite slab with very light reinforcing mesh. Subject to this assumption, the approach enables the large-deflection load capacity to be calculated given any beam yield strength, which is equivalent to calculating the enhancement of limiting beam temperature for a case in which the applied load remains constant. In terms of ISO834 standard fire exposure, this implies that the enhancement can be directly measured in terms of fire resistance time, requiring only a calculation of heat transfer to the unprotected downstand steel beams. This is also true for other exposures such as parametric fires.The most notable current analytical approach to TMA of composite floors is that developed by Bailey et al. [], based on the doctoral research of Hayes []. This assumes that the geometry of the yield-line pattern is that of the non-composite concrete slab, which has been shown earlier to be incorrect for composite panels. For non-trivial deflections a distribution of membrane force per unit length is assumed, which is not capable of changing with deflection and which implicitly assumes that concrete stress blocks always exist over the whole of all yield lines, as well as an assumed central tension crack, which represents the failure condition. The equilibrium of the membrane forces and plastic moments on each slab facet then produce independent enhancements of the yield-line capacity, which are then combined to give a weighted average enhancement. This enhanced load capacity of the concrete slab at any deflection is then added to the high-temperature load capacity of the array of composite beams of which form the panel. This is clearly illogical in a nonlinear system. The deflection at the occurrence of the final tension crack is predicted by superposing deflections due to thermal bowing of a simply supported strip of slab under a linearized temperature distribution onto the “catenary” deflection for a fixed-ended strip under half the ambient-temperature yield strain of the reinforcement. This superposition of deflections of systems with different boundary conditions is not legitimate, but the total is used as the limiting deflection for the capacity enhancement.In the same context, the experimental evidence for the appearance of a purely-tensile short-span crack as a failure state in composite (as opposed to lightly-reinforced concrete) slab panels in fire seems rather slight. 
Even the post-test observation of a closed mid-span crack across the upper surface of a panel at Cardington [] seems likely to reflect the compression zone of the y-aligned central yield line which would occur in this case, after reduction of the deflection due to cooling. There was certainly no evidence that this crack had opened across the full depth of the slab during the fire. Clearly, there is a possibility that tensile through-depth cracks may appear at extremely high steel temperatures (and therefore for very low load-levels), when the downstand beams have lost nearly all of their strength, across the triangular or trapezoidal facets rotating about the slab edges parallel to these downstand beams. This eventuality clearly needs to be investigated in extensions of the work. In terms of compartment integrity failure, however, the loss of concrete contact across the central yield line, which happens relatively early, provides a more likely route for the fire to breach the compartmentation.The more recent approach of Omer et al. [] for the strength of lightly reinforced concrete slabs is in several ways similar to that presented here. Equilibrium is based on the kinematics of the facets of the optimal yield-line mechanism of the slab, although the concrete forces across yield lines are concentrated at discrete points of contact rather than creating stress blocks whose shape changes with deflection; this is very similar in principle to an earlier simplified approach [] adopted by the first author. A bond-slip model is adopted, and failure is defined by fracture of the reinforcement at the assumed full-depth short-span crack, either at mid-span or at the yield-line intersection. However, it does not consider the effect of reinforcement fracture within the yield lines themselves. The method includes a steel constitutive model with both yield and ultimate strengths, and uses a virtual work solution process to establish equilibrium at finite deflections. Omer includes a process, based on dividing the slab into a series of beam-strips, for including the effect of thermal curvature of the concrete facets. This is discussed in the following section.It is true that the temperatures of the concrete slab, or of the mesh within it, have not been considered so far. It has been shown in that reduction of the strength of the mesh because of direct conduction of heat to it through the concrete is of minor importance for the typical composite arrangement considered. In fact, the effect of differential expansion of bars and their surrounding concrete may have the beneficial effect of reducing bond, and thus increasing the fracture crack-widths. Both effects of this heating mechanism clearly need to be investigated further. Heating of the bars crossing opening yield-line cracks by direct radiation and convection from the fire below is potentially much more dangerous, because this creates a “hot spot”, which does not increase the bar slip from the crack-faces, but does locally weaken the steel within the crack. An investigation of this heat transfer within an opening crack in which the steel is exposed is also clearly needed.Another possible effect due to the temperature distribution through the depth of the concrete slab at any time of fire exposure has been neglected in the development so far. The effect of differential heating, and therefore of differential thermal expansion, through the depth of the concrete facets has not been included. 
In equilibrating the stresses caused by this differential expansion, a "thermal bowing" of the composite panel is produced. A further bowing effect on the facets can be caused by the thermal expansion of the downstand beams, restrained by the shear connection to the underside of the concrete slab, which creates bending moments along the lines of the beams. This extra deflection will certainly increase the lever-arms of the internal forces acting on yield lines in the lateral load capacity calculation given earlier. The beam-strip treatment adopted by Omer [], in the only study which has so far attempted to account for thermal curvature, should be adequate at least to establish the essential behaviour. Omer's approach was to represent the thermal expansion effects as free net expansions and curvatures of beam-strips across the slab facets of plain concrete slabs, tapering to zero curvature at the supported edges, although he states that the separate nature of the curves in adjacent facets causes some inaccuracy in the early stages of heating. A further approach, which seems appropriate, might be to match the initial yield-line deflection of the faceted model with that of the thermally curved slab using its first Fourier coefficient. Different representations of the thermal curvature effects will be tested in extensions of this work.

In design terms, it has been seen that the main factor in successful utilization of tensile membrane action in enhancing the load capacity of composite floors in fire is ductility. This ductility is characterized by the fracture crack-width, which is controlled by the inherent ductility of the bar material and the slip which it can experience from each crack-face. In contrast to the normal principle of reinforced concrete design, it is advantageous to minimize the bond strength, at least in the areas where yield lines form. For welded meshes composed of plain bars the ductility provided by the weld spacing alone may be sufficient to produce an adequate fracture crack-width, and if the welds are weak compared with the tensile strength of the bars then fracture of the first weld on each side of the crack will further increase the ductility of the mechanism. For normal slab bending resistance, the presence of unbroken welds at working loads will ensure that plain reinforcing mesh is still involved, although its stress may locally be marginally lower than it would be with perfect continuous bond. Class C mesh is generally considered an expensive choice, but may be advantageous in some cases; an alternative is to use a mesh with greater cross-sectional area (say A252 in place of A142).

The key aspects of tensile membrane action, considered as a structural fire engineering strategy for lightly reinforced composite slabs, have been illustrated in this paper, subject to very standard assumptions. Although some questions remain to be addressed in further work, the principles seem logical for cases where discrete yield lines form because of the localization of cracking associated with small percentages of reinforcement.

Comparative analysis of tangentially laser-processed fluted polycrystalline diamond drilling tools

Ultrashort-pulsed laser ablation is increasingly applied in various fields of science and technology. For the purpose of processing ultra-hard materials, such as diamond and cubic boron nitride (CBN) composites, lasers have the decisive advantage of wear-free material removal.
The availability of high-powered ultrashort-pulsed laser sources enables the efficient applications of tangential processing strategies to generate complex 3D geometries. Compared to the conventionally applied 2.5D volume ablation strategy, the resulting workpiece form tolerance, repeatability, and surface quality is increased significantly and does not depend on the quality of the initial surface. This makes tangential processing an ideal choice for high-precision finishing processes.Ongoing industrial developments towards the applications of hard materials, such as tungsten carbide (WC) and ceramics, automated production, and tighter tolerance create a demand for ultra-hard tool materials, such as diamond and cubic boron nitride (CBN), for increased dimensional stability and tool lifetime. The properties of these materials force conventional tool production technologies, especially grinding, to their limits. Due to its wear-free nature, pulsed laser ablation offers the advantage of highly flexible and precise machining, independent of the mechanical properties of the processed material. This enables the development of new processes for generating ultra-hard tools in a wide range of industrial applications.The outstanding properties of zirconium dioxide, such as hardness, wear resistance, chemical and thermal stability, light-weight and biocompatibility, cause their increasing application as technical elements in various fields including biomedicine, metal forming, turbine construction, bearing technology, jet nozzles, etc. The introduction of laser-processed solid PCD tools with defined cutting edges, as presented in this paper, may significantly impact the use of these materials by enabling efficient and precise processing of small geometries, such as bores which cannot be ground due to limited accessibility.Fluted cutting tools are conventionally manufactured by grinding. The helical groove constitutes the most challenging feature both for the design and for the manufacturing of this tool geometry and is topic of extensive research efforts. Various models are introduced to support this process. Li et al. However, conventional grinding processes suffer from geometric limitations, wear, long processing times and high mechanical loads when applied to ultra-hard materials. For this reason, a number of recent studies investigated the capability of other manufacturing processes, such as electrical discharge machining (EDM), to generate tool geometries and prepare cutting edges in PCD and composite polycrystalline CBN (PCBN). Zhang et al. Another manufacturing process increasingly applied to ultra-hard materials is pulsed laser ablation. Chong et al. A number of recent studies investigated the suitability of grinding and milling processes with diamond tools on ceramic materials in general and zirconium dioxide in particular. Bian et al. Compared to 2.5D volume ablation, the tangential laser process has the decisive advantage that the material removal takes place orthogonally to the beam direction with a comparatively well-defined and stable dimensional limit. Consequently, the resulting geometry is defined mainly by the relative motion between laser beam and workpiece. Timmer The manufacturing process is implemented on a modified EWAG Laser Line machine tool. shows the processing area and the machine coordinate system of the Laser Line. The applied Time Bandwidth Fuego solid-state laser system has a pulse width of τp
< 12 ps over a repetition rate from 0.2 to 8.2 MHz at a centre wavelength of 1064 nm and a maximum average power of 35 W. Retardation plates in the beam path ensure uniform processing conditions by generating a circular polarisation state. The raw beam is expanded through a focus-shifting device. A digital galvanometer scanner enables highly dynamic movement of the laser beam in the processing area (vScan
≤ 2000 mm/s). An f-theta lens with a focal length of f
= 163 mm focusses the beam on a plane parallel to the X–Y machine coordinate system. Gas nozzles in the processing area provide a cooling jet of pressurised air, and an exhaust system removes ablated material residuals. A high precision CNC system with X′, Y, Z′, B′ and C′ axes enables mechanical motion of the workpiece relative to the field of view of the optical axes (U, V and W) in the machine coordinate system.Due to an order of magnitude difference in the cycle time of the controls for the mechanical and the optical axes, these controls cannot be synchronised. Consequently, as described by Dold The manufacturing process is applied to composite PCD tool blanks. The blanks are cylindrical samples, wire-eroded from WC-backed 4 mm PCD rounds, provided by Element Six Ltd., and brazed on steel shafts. The material grade is a small-grain composite polycrystalline diamond material with an average grain size of 5 μm and approximately 93% diamond content. As this material provides high edge retention and is available in sufficiently thick layers, it is ideally suited for the manufacturing of solid PCD tools with defined cutting edges.The entire manufacturing process of the PCD drilling tool is performed in one setup to avoid re-clamping of the tool blank. As illustrated in The laser focus position is adjusted relative to the tool blank by the Z-axis at each step of the process. All laser–material interaction takes place only in the focus plane of the optical system. All but the feature at the chisel edge are produced at the maximum available laser power, which results in a fluence of 6.7 J/cm2. As introduced by Dold et al. a) is processed by the application of a laser hatch generating a stationary circular interaction zone. The target geometry is achieved by multiple linear mechanical motions of the tool blank through this zone with a constant infeed of 50 μm, similar to the movements of a workpiece during a conventional milling or grinding process. The processing of the helical flute (b) is performed by a contour motion of the optical axes that is superimposed with a spiral motion of the mechanical axes. This process is similar to conventional grinding of helical grooves on twist drills as described by Li et al. d) is processed similarly to the helical flute. gives an overview of the applied processing parameters. The produced tools with diameter 2 mm shown in are machined entirely by the discussed approach. The processing time is approximately 4 h per tool.The cutting edge radii and the surface quality of the PCD tools are measured using a variable-focus 3D-microscope. As shown in , the PCD tools are tested by drilling zirconium dioxide (ZrO2, TZP-A), supplied by METOXIT high tech ceramics, stabilised with Yttrium oxide (Y2O3
< 5 wt%) and reinforced with aluminium oxide (Al2O3
< 0.25 wt%). This structural ceramic material exhibits a hardness of 1200 HV. To compare the tool performance commercial CVD-D coated WC drills and a solid PCD tools processed by EDM are chosen as reference. The coated tool exhibits an equivalent geometry to the laser-processed tools. The EDM tools are manufactured from the same PCD-grade as the laser-processed tools, by a similar process as discussed in . The drilling process is interrupted after an interval of 0.5 mm for chip evacuation. These parameters concur with the recommendations from the supplier of the CVD-D coated reference tools. Feed-force measurements are acquired during these experiments with a piezo multicomponent dynamometer. The long-term performance of the tools is tested with two tools of each type.The radii of the primary, the secondary, and the notch cutting edges are analysed on the unused tools and at regular intervals during the drilling experiments utilising the algorithms introduced by Henerichs et al. , the results of the measurements on the CVD-D coated tools show radii between 16 and 18 μm, attributed by the radius augmentation during the coating process. The laser-processed PCD tools exhibit cutting edge radii below 10 μm. Radii below 5 μm can be achieved by the laser process, as shown at the secondary cutting edges. The EDG tools exhibit the sharpest cutting edges with about 4.5 μm radius. Only at the secondary cutting edge, which is of little relevance for the application of a drilling tool, shows a larger radius of approximately 12 μm.The long-term drilling tests indicate a significant difference in the tool lifetime between the CVD-D coated tools and the PCD tools. The comparison of long-term wear on the solid PCD tools processed by EDM and laser is depicted in illustrates the progression of the primary cutting edge wear over the lifetime of the PCD tools. The laser-processed cutting edges geometry are asymmetrical with a convex rake face, which is the result of the waterfall effect typically occurring during tangential laser processing at cutting edges, as indicated by Eberle et al. Roughness measurements are performed on all laser-processed surfaces. The measurements are filtered with a cut-off wavelength of 80 μm and conform to ISO 4288. The results () measured at four regions of interest show Ra-values above 200 nm for the reference tool. This roughness results from the CVD-D coating process, which is also prone to surface defects at multiple locations on the tool surface. The tangential laser process produces homogeneous surface quality, with Ra-values in the range of 150 nm on the PCD tools. The surfaces on the EDM-processed tool exhibit the highest roughness values in the range of 250–300 nm.The measured Rz-values on the PCD surfaces do not exceed 1 μm, which is considerably lower than the average diamond grain size of 5 μm. As Dold et al. compares the feed force progression during the third drilling interval of the fifth bore with the different tools. As the drill entry is completed in the first interval, the entire cutting edge is in contact with the workpiece from the beginning of the third interval. The feed forces can be divided into two phases. The first phase at t
= 1 s is characterised by a feed force peak when the tool comes into contact with the workpiece because a continuous cutting process has yet to develop. This peak is considerably higher for the CVD-D coated tools at 200 N than for the laser-processed PCD tools at 90 N. This is attributed to the higher cutting edge radii of the CVD-D coated tool. The EDM-processed tools with the sharpest cutting edge radius almost eliminate this force peak. In the second phase, the feed forces of the PCD tools stabilise at an approximately 35% lower level than those of the coated tools. This is also attributed to the lower cutting edge radii. The similar force levels of the EDM-processed and the laser-processed tools, in spite of the 10% smaller diameter of the EDM-processed tools suggest a disadvantageous geometry resulting from the EDM process, which may underlie stronger geometric limitations than the other production methods.A strong variance in morphology of the chips resulting from the drilling tests (All tools are applied with the same drilling parameters. Therefore, the depth of cut does not explain these different chip morphologies. According to Bifano et al. This paper introduces the use of ultrashort-pulsed laser ablation to generate characteristic features on fluted drilling tools. By using two optical and five mechanical axes and appropriate tangential strategies, its ability to process complex 3D tool geometries at high precision in ultra-hard materials is demonstrated. The method is applied to produce solid PCD drilling tools with a number of characteristic features. Achieved cutting edge radii and surface roughness are in the range of or below those of a commercial CVD-D coated tool and EDM-processed tools. Comparative tests by drilling in zirconium dioxide demonstrate superior durability of the tangentially laser-processed PCD tools compared to the CVD-D coated tools and an advantage regarding processing forces as well as tool wear compared to EDM-processed tools.While the presented work identifies the possibility of processing hard ceramic materials by PCD tools with defined cutting edges, a variety of aspects remain to be investigated. Future work may examine the performance of different drilling tool geometries and processing parameters for a range of workpiece materials. Furthermore, the process may be transferred to microdrills, milling tool geometries and other tool materials, such as PCBN and monocrystalline diamond. As the design of drilling and milling tools is driven strongly by the geometrical constraints of conventional processes, the development of appropriate design-for-manufacturing rules for laser-processed tools would support the efficiency of the presented process and exploit the potential of new geometries that cannot be manufactured by conventional means. 
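As an aside on the surface-quality comparison reported above, the quoted parameters follow directly from the ISO 4287 definitions once the measured profile has been filtered at the stated cut-off. The sketch below is a simplified illustration on a synthetic profile, with a moving average standing in for the standard Gaussian profile filter; it is not the evaluation chain of the microscope used in this work.

    import numpy as np

    def roughness_parameters(z, n_sampling_lengths=5):
        # Ra and Rz (ISO 4287 style) of a roughness profile z after mean-line removal
        z = np.asarray(z, dtype=float)
        z = z - z.mean()                                      # remove the mean line
        ra = np.abs(z).mean()                                 # arithmetic mean deviation
        segments = np.array_split(z, n_sampling_lengths)
        rz = np.mean([s.max() - s.min() for s in segments])   # mean peak-to-valley height
        return ra, rz

    # synthetic profile in micrometres; the long-wave part is removed as "waviness"
    x = np.linspace(0.0, 4.0, 4000)
    profile = 0.5 * np.sin(2 * np.pi * x / 0.8) + 0.15 * np.sin(2 * np.pi * x / 0.05)
    waviness = np.convolve(profile, np.ones(80) / 80.0, mode="same")
    ra, rz = roughness_parameters(profile - waviness)
    print(ra, rz)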
High potential remains for the optimisation of the processing strategies and the development of a suitable roughing process to reduce processing time and to enable cost-effective applications of the tangential laser process.Maximilian Warhanek: practical implementation of the tangential laser process, tool manufacturing, measurement and tool testing, main writer of the article.Christian Walter: Theoretical basis of tangential laser process for complex tool geometries, support during process implementation and supporting writer of the articleMatthias Hirschi: Theoretical basis of tangential laser process for complex tool geometries, implementation of CAM and CNC-control software for the tangential laser processJean Francois Bucourt: Development and production of solid PCD tools produced by electrical discharge machining as benchmark for the laser-processed tools.Konrad Wegener: Theoretical basis of tangential laser process for complex tool geometries, supporting writer of the article, research management and supervision.Maximilian Warhanek completed his studies of mechanical engineering at ETH Zurich focussing on production technology and logistics. In his PhD studies, he specialises on the ultrashort-pulsed laser processing of hard materials, such as diamond and CBN. He is the coordinator of the EU research project DIPLAT (Dr Christian Walter completed his studies of mechanical engineering at the Technical University Ilmenau. After some industrial experience as R&D engineer at Bosch Rexroth, he completed a doctorate at ETH Zurich on the conditioning of CBN tools by different laser processes. To date, Dr Walter is an engineering project manager in the SHL Group.Matthias Hirschi is a trained design engineer and software engineer graduated from a Swiss college of higher education. In his professional career, he specialised on CAD/CAM development, at first at ABB and currently at EWAG AG.Jens Boos is a trained industrial mechanic and studied mechanical engineering at the college of higher education Aachen. After five years experience as research assistant at RWTH Aachen and ETH Zurich, he is a laboratory manager at inspire AG Switzerland.Jean Francois Bucourt Born on 09.12.1951, after studying law, Mr Bucourt took over the family industrial diamond tools factory. After focusing mainly on grinding technology for a long period, he expanded the business to PCD and CBN cutting tool technology. He looks back on 42 years of experience in conception, development, test applications and performance improvement in diamond and CBN grinding, cutting and superfinishing tools.Prof Dr Konrad Wegener studied mechanical engineering at the Technical University of Braunschweig and wrote his PhD thesis on constitutive equations for plastic material behaviours. After an industrial career in the Schuler group, where he managed the engagement of the company in laser technology, he became full professor of production technology and machine tools at ETH Zurich in 2003. He is head of the IWF (Institute of Machine Tools and Manufacturing).Properties and hydrolysis of PLGA and PLLA cross-linked with electron beam radiationRadiation has been used as a processing tool to modify the properties of polymers. The aim of this study is to understand how electron beam radiation, together with pentaerythritol tetraacrylate (PTTA) as a tetra-functional monomer, can alter the properties (i.e. thermal and mechanical) and hydrolysis rates of PLGA and PLLA. 
The effects of radiation dose and PFM concentration on the physical properties of the polymers were investigated. The results showed that PLGA and PLLA cross-linked upon irradiation, and an increase in gel content was observed. The glass transition temperature (Tg) and the mechanical properties of the polymers also increased. Cross-linked PLGA and PLLA samples were found to retard hydrolytic degradation, and the mechanical properties of these polymers were unaffected by hydrolysis. In summary, PLGA and PLLA cross-linked with PTTA were found to have enhanced mechanical properties and were able to retard hydrolytic degradation.

Biodegradable polymers are promising materials for a wide range of applications, including biomedical and pharmaceutical ones. Of these, the aliphatic polyesters poly(L-lactide) (PLLA) and poly(lactide-co-glycolic acid) (PLGA) have been extensively investigated because they demonstrate good toxicological safety and biodegradability. Radiation has been used as a processing technique to modify the properties of polymers, either through chain scission or cross-linking. Chain scission generally results in the degradation of the polymer properties, whereas the contrary is true for cross-linking. Recently, electron beam irradiation has been shown to achieve tunable hydrolytic degradation rates from PLGA and PLLA polymers, and it has proven to be an effective tool for altering the drug release profiles of PLGA and PLLA. Recent studies on multi-layer PLGA and PLLA show that irradiated multi-layer systems elicit degradation characteristics reminiscent of surface erosion. A wide range of synthetic polymers can be cross-linked using a PFM with irradiation, and the addition of a PFM to radiation-cured polymers is known to increase cross-linking efficiency. The aim of this study is therefore to understand how electron beam radiation, together with pentaerythritol tetraacrylate (PTTA) as a PFM cross-linking agent, can alter the mechanical properties and hydrolysis of PLGA and PLLA. In this study, the effects of cross-linking parameters, such as radiation dose and PFM concentration, on polymer properties will be investigated. Subsequently, the degradation profile and kinetics of these cross-linked PLGA and PLLA polymers will also be reported.

Polymers of PLGA (80:20) (PLGA) (IV: 5.01) and PLLA (IV: 8.42) purchased from Purac Biochem (Netherlands) were used in this study. The number average molecular weights measured for PLGA and PLLA were 6.28 × 10⁵ g/mol and 8.9 × 10⁵ g/mol, respectively.
The cross-linking agent, pentaerythritol tetraacrylate (PTTA), was purchased from Sigma Aldrich (Singapore) and its chemical structure is shown in . HPLC-grade dichloromethane (DCM), purchased from E. Merck (Germany), was used as the polymer solvent. For the extraction of gel from the irradiated polymers, HPLC-grade chloroform purchased from Tedia (USA) was used. All polymers and chemicals were used as-received, unless otherwise stated.

For film preparation, PLLA and PLGA were first dissolved in DCM at polymer-to-solvent ratios of 1:20 and 1:15, respectively. After complete dissolution, specific weight ratios (4, 6, 8 and 10 wt%) of PTTA were added and stirred for 12 h before casting, with the PTTA dissolving well in the polymer solution. The resultant solution was then cast over a glass plate using a film applicator. Prior to casting, the wet thickness (tw) of the films was adjusted by the casting knife (to approximately 1.23 mm for PLLA and 1.00 mm for PLGA) to ensure that the resultant dry thickness (td) would be less than 0.05 mm for ideal e-beam penetration at 175 kV. A CB175 model Energy Sciences Inc. (ESI) electron beam accelerator, operated at room temperature and in the presence of air, was used for electron beam irradiation. Films of PLGA and PLLA with PTTA were exposed to radiation doses of 2.5, 3, 4 and 5 Mrad. Subsequently, these films were characterized and further studied for hydrolytic degradation.

PLGA and PLLA films cross-linked with 4 wt% PTTA at 3 Mrad were chosen for in vitro hydrolytic degradation studies. These films were placed in 10 ml screw-top bottles filled with phosphate buffered saline (PBS) solution (pH 7.4) and incubated at 37 °C. Samples were removed weekly for characterization, and the pH of the solution was monitored and maintained at 7.4. At the designated time points, films were removed, rinsed with distilled water and surface dried using water-absorbent paper. The samples were then dried in a vacuum oven at 40 °C for 5 days, after which the final dry mass (md) was recorded. Mass loss was taken as the difference between the dry mass (md) and the initial mass (m0) of the sample. Results for mass loss were normalized by dividing by the initial masses (m0) and are reported as percentages.

GPC was performed on an Agilent 1100 series gel permeation chromatograph at 35 °C, with 100% chloroform as the solvent and a refractive index detector (RID). The calibration was done against polystyrene standards and the flow rate used was 1 ml min⁻¹. Average molecular weights of the as-received polymers were measured before cross-linking.

After irradiation, the gel fraction was measured by first dissolving 50 mg of polymer sample in chloroform at room temperature for 48 h. Thereafter, the solution was filtered and the insoluble portion was vacuum dried at 55 °C for 24 h to ensure that no residual chloroform remained. The final gel fraction was calculated using the following equation:

Gel fraction (%) = (Wg/W0) × 100

where W0 is the initial dry weight of the cross-linked polymer and Wg is the remaining weight (dry gel component) of the cross-linked polymer after dissolving in chloroform at room temperature for 48 h.

Changes to the thermal properties were investigated with the use of a TA Instruments DSC 2920 Modulated DSC apparatus. To avoid oxidative degradation, the samples and reference pans were purged with nitrogen at a constant flow rate of 48 ml/min. Approximately 5 mg of the sample was heated from −20 to 250 °C at a scan rate of 10 °C/min.
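The gravimetric quantities used throughout the characterization reduce to simple ratios of the recorded masses. A minimal sketch is given below; the variable names are illustrative, and the water-uptake expression uses the common wet-versus-dry definition, which may differ in detail from the procedure of the original work.

    def gel_fraction_pct(w0_mg, wg_mg):
        # w0: initial dry weight; wg: dry insoluble (gel) weight after 48 h extraction in chloroform
        return 100.0 * wg_mg / w0_mg

    def mass_loss_pct(m0_mg, md_mg):
        # m0: initial dry mass; md: dry mass after degradation and vacuum drying
        return 100.0 * (m0_mg - md_mg) / m0_mg

    def water_uptake_pct(md_mg, mw_mg):
        # mw: wet (surface-dried) mass at the sampling point; md: the corresponding dry mass
        return 100.0 * (mw_mg - md_mg) / md_mg

    print(gel_fraction_pct(50.0, 38.5), mass_loss_pct(50.0, 48.7), water_uptake_pct(48.7, 49.2))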
Degree of crystallinity (DOC) was calculated for PLLA, from the difference between the enthalpy of melting (ΔHm) and the enthalpy of crystallization (ΔHc) divided by the enthalpy of fusion (ΔHf(100%)) of a 100% crystalline PLLA (135 J/g) The tensile strength of the polymer films was measured using an Instron MicroTester 5848 with a pressure jaw grip. The measurements were taken at room temperature using a 2 kN load cell. The crosshead speed used was 2 mm/min. Samples were first cut into a dog-bone shape of gauge length 7.62 mm. At least 25 mm of initial grip separation was kept during test and the test was terminated when the load dropped by 75%. The average result from four samples was taken.For FTIR, degraded film samples were measured directly on the Perkin Elmer Spectrum GX FT-IR System. The scan range used was from 400 cm−1 to 4000 cm−1, performed with 16 scans per sample.The SEM was used to evaluate the phase morphology of the polymer blends and surface morphology of the cross-linked polymers after hydrolysis. Images of 2000× magnification were collected at 5 kV using the JEOL JSM 6340F SEM. The film samples were affixed to a carbon tape and coated with platinum at 20 mA for 80 s using the JEOL JFC-1600 Auto Fine Coater before any SEM characterization was conducted.Gel fractions of cross-linked samples were measured and the results are summarized in . The results show that high gel fractions (>70%) were obtained from both PLGA and PLLA as a result of radiation-induced cross-linking. A common trend was observed across all samples (i.e. PLGA and PLLA) with increasing radiation dose; whereby gel fractions peaked at 3 Mrad before a decrease was observed. The decrease in gel fraction above 3 Mrad could be due to some chain scission occurring in the film at higher radiation doses. Quynh et al. Gel fractions of cross-linked PLGA and PLLA were also observed to increase with increasing PTTA concentrations, implying an increased in cross-linking efficiency. Mitomo et al. Gel fractions were observed to be higher in PLGA than PLLA. Unlike PLGA which is amorphous, PLLA is semi-crystalline with a two-phase system consisting of amorphous and crystalline regions. During irradiation, energy is deposited uniformly and radicals are formed throughout the polymer in both the amorphous and crystalline regions of PLLA The thermal properties of these polymers were also altered after cross-linking. plots the glass transition temperature (Tg) of the both cross-linked PLGA and PLLA. For both polymers, Tg values were observed to increase after cross-linking. The formation of a three-dimensional amorphous network restricted the flexibility of the polymer chains. However, there was no significant difference in Tg with increasing radiation dose and PTTA concentration. However, the contrary was observed for melting temperature (Tm) of PLLA (), where Tm was observed to decrease after cross-linking. The decrease in Tm suggests two possible reasons arising from irradiation. First, the crystalline regions were destroyed with irradiation (chain scission), thus reducing the degree of crystallinity and the size of the crystals, as previously reported that plots the degree of crystallinity (DOC) of PLLA after cross-linking. Here, only the DOC of PLLA-4 wt% and 6 wt% were plotted because these are representative of other irradiated samples. It can be seen that the DOC of PLLA decreased dramatically (70–90%) after cross-linking, affirming that chain regularity had been disrupted. 
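For clarity, the crystallinity values discussed above follow the standard DSC relation, with ΔHf(100%) = 135 J/g for fully crystalline PLLA:

\[ X_{c}\,(\%) = \frac{\Delta H_{m} - \Delta H_{c}}{\Delta H_{f(100\%)}} \times 100 \]

so the reported 70–90% drop in crystallinity corresponds directly to the reduction in net melting enthalpy measured after cross-linking.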
The large molecular chain network therefore reduced regularity of the chains, suppressed molecular motion for crystallization and decreased DOC of the polymer.The Young's moduli of cross-linked PLGA and PLLA are plotted in . The results show that Young's moduli of PLGA and PLLA increased after cross-linking. A similar trend with gel fraction was observed for the Young's modulus, where Young's modulus for PLGA and PLLA generally peaked at 3 Mrad. The various mechanical properties (i.e. Young's modulus, yield strength, and strain at yield) of PLGA and PLLA at 3 Mrad are further summarized in . Similar to Young's modulus, a significant increased in yield strength and strain at yield was also observed after cross-linking. At 3 Mrad irradiation, yield strength and strain at yield was highest at 4 wt% for PLGA and 8 wt% for PLLA. Generally, the increased in mechanical properties were in good agreement with the increased Tg and gel content values as reported earlier on. However, there was no significant difference in the Young's modulus with increasing PTTA concentration.Another interesting observation was that the use of PTTA in cross-linking PLLA resulted in higher mechanical properties as compared to those obtained with TAIC For consistency in comparison, PLGA and PLLA films were cross-linked with 4 wt% PTTA at 3 Mrad for hydrolysis studies. plots the water uptake of cross-linked and non-cross-linked PLGA and PLLA samples. Water uptake generally increased with degradation time across the samples. However, there was no significant difference in the water uptake, across all samples, since values were generally low (<1.5 wt%). Polymer mass loss was plotted in and the results showed that the non-cross-linked samples had higher mass loss. Similar results were obtained by Quynh et al. . This implied that hydrolysis was occurring in the amorphous cross-linked network with degradation time.Mass loss can be attributed to the hydrolysis of the ester bonds and the subsequent leaching of soluble products from the polymer. A lower mass loss from the cross-linked samples indicated that a three-dimensional network was more resistant to hydrolysis. To form leachable oligomers, more bonds had to be hydrolyzed because of the dense network formed through cross-linking. The dense network of inter-connecting polymer chains worked together, almost as if it was a large macromolecule, protecting the ester bonds from hydrolysis. At the same time, the higher Tg () resulted in a polymer that was more resistant to water penetration. However, the more open aliphatic polyesters (non-cross-linked) were more susceptible to forming oligomers, which explains for their faster mass loss. In summary, only the non-cross-linked samples experienced significant mass loss, proving that cross-linking improved the polymer's resistance to hydrolysis. It was also observed () that the rate of decrease in gel fraction was faster for PLGA than PLLA, which can be attributed to the steric hindrance provided by the methyl side group of the LA structure against hydrolytic attack of its ester bonds.Thermal properties (i.e. Tg and Tm) of the polymers remained relatively unchanged with degradation time, as shown from the MDSC thermograms in . The formation of a Tc and Tm peak from PLGA from week 8 confirmed that the amorphous network was undergoing hydrolysis . The hydrolysis of PLLA was however not evident from the MDSC thermograms. Further characterization of the hydrolysis of PLGA and PLLA can be observed from their FTIR spectra (). 
The peak intensity at 3505 cm−1 of PLGA increased relative to other peaks. Peak intensity for PLGA also increased more rapidly in comparison to PLLA. This increase is due to the formation of –OH group during the degradation process, an indication that the polymer is degrading through hydrolysis summarizes the mechanical properties of these samples up to 12 weeks of degradation. The results show that the Young's modulus and yield strength of the non-cross-linked samples decreased with hydrolysis, while the mechanical properties of the cross-linked PLGA and PLLA samples remained relatively unchanged. Cross-linking therefore retarded polymer hydrolysis and sustained mechanical integrity of these polymers during hydrolysis. The SEM micrographs of the films before and after hydrolysis are shown in . Films before degradation showed good consistency of PTTA/polymer blend, exhibiting a smooth morphology after cross-linking. After hydrolysis, the cross-linked films were found to show some degradation on the surface. Nevertheless, from the SEM, both cross-linked films were found to degrade less significantly as compared to the non-cross-linked films, which generally showed severe surface pitting arising from hydrolysis Cross-linking of PLGA and PLLA with a PTTA, a tetra-functional monomer, increased gel content in the polymer by creating three-dimensional network of polymer chains. Results showed that Tg and mechanical properties of the polymers increased upon cross-linking. Cross-linking at 3 Mrad gave the highest gel fraction and enhanced the mechanical properties for both PLGA and PLLA. Hydrolytically degraded cross-linked PLGA and PLLA showed a retardation of polymer hydrolysis. Hydrolysis occurred in the amorphous network of the polymer, but not at the expense of its mechanical properties. Therefore, cross-linking of PLGA and PLLA not only increased their mechanical properties, but also retarded their hydrolysis.Supercritical geothermal reservoir revealed by a granite–porphyry systemTo understand the geological properties of a supercritical geothermal reservoir, we investigated a granite–porphyry system as a natural analog. Quartz veins, hydrothermal breccia veins, and glassy veins are present in Neogene granitoids in NE Japan. The glassy veins formed at 500–550 °C under lithostatic pressures, and then pressures dropped drastically. The solubility of silica also dropped, resulting in formation of quartz veins under a hydrostatic pressure regime. Connections between the lithostatic and hydrostatic pressure regimes were key to the formation of the hydrothermal breccia veins, and the granite–porphyry system provides useful information for creation of fracture clouds in supercritical geothermal reservoirs.Following the Great East Japan Earthquake and the accident at the Fukushima Daiichi Nuclear power station on 3.11 (11th March) 2011, geothermal energy came to be considered one of the most promising sources of renewable energy for the future in Japan. However, there are several geological and geophysical issues to consider. First is that ∼80% of the potential geothermal energy in Japan lies inside National Parks, second is Onsen (hot springs) problem which is conflict between geothermal developers and Onsen owners due to some misunderstandings of geothermal and hot spring resources, and another is induced seismicity related to the development of geothermal energy. 
The temperatures of geothermal fields operating in Japan range from 200 to 300 °C (average ∼250 °C), and the depths range from 1000 to 2000 m (average ∼1500 m). In conventional geothermal reservoirs, the mechanical behavior of the rocks is presumed to be brittle, and convection of the hydrothermal fluid through existing network is the main method of circulation in the reservoir. In order to minimize induced seismicity, a rock mass that is “beyond brittle” is one possible candidate, because the rock mechanics of “beyond brittle” material is one of plastic deformation rather than brittle failure (At Kakkonda in NE Japan, the exploration well WD-1a encountered the partly solidified Kakkonda Granite and inferred reservoir temperatures in excess of 500 °C (). The project called DSGR (Deep-Seated Geothermal Reservoir) was conducted by NEDO (New Energy Development Organization, Japan); nevertheless, there were no strong emissions of steam from the bottom of the well. In an attempt to understand the findings of DSGR, we have studied an exposed Quaternary granitoid (the Takidani Granodiorite), since it is analogous to the type of granitoid rock mass that might host a deep-seated (artificial) geothermal reservoir (). From an engineering point of view, the Takidani Granodiorite is a suitable candidate as a natural analog for a HDR/HWR (Hot Dry Rock/Hot Wet Rock) geothermal reservoir, particularly under supercritical geofluid conditions. The Takidani Granodiorite is located at the boundary of the Eurasian and North American Plates (), and extensive silicic magmatic activity (both volcanic and plutonic) occurred through the Pliocene and Pleistocene. In addition, we have investigated hydrothermal activity in order to understand the evolution of supercritical geothermal fluids in certain geological settings. Temperatures over 350 °C are in the “beyond brittle” condition (a temperature of ∼350 °C coincides with the brittle–ductile transition), and the ways in which fractures develop under these conditions are unclear.Porphyry copper deposits represent natural “beyond brittle” analogs where fluids from molten material (magma) infiltrate a ductile rock mass at ∼600 °C, and where lithostatic pressures cause fractures in the rock mass, creating a stockwork fracture system (). The large strain rates during fluid injection released from the host rock render the rock mass brittle, allowing it to fracture in tensile and shear modes. In these porphyry deposits, we are able to observe several kinds of fractures represented by millimeter- to centimeter-scale quartz veins (), where quartz filled and plugged the fractures; apparently the quartz was precipitated during adiabatic decompression and cooling as the fluids traversed from lithostatic to hydrostatic pressure regimes.A granite–porphyry system, associated with hydrothermal activity and mineralization, provides a suitable natural analog for studying a deep-seated geothermal reservoir where stockwork fracture systems are created in the presence of supercritical geothermal fluids. In this paper we describe fracture networks and their formation mechanisms using petrology and fluid inclusion studies in order to understand this “beyond brittle” supercritical geothermal reservoir.The study area is located in central Akita Prefecture, Tohoku District, NE Japan. In the vicinity of the area, volcano-sedimentary rock sequences of Paleogene to Neogene age were deposited around a basement of Cretaceous granitoids. 
The tectonic setting was one of an intra-rift rise formed during the period of back-arc spreading of the Sea of Japan that started at 28 Ma and continued until 13 Ma. Paleogene sequences since the Eocene are mainly made up of terrestrial andesite lavas with subordinate pyroclastic rocks, and they represent continental margin volcanism prior to back-arc opening. These sequences were followed by volcanoclastics with several basaltic lava flows in the periphery of the study area, as back-arc volcanism continued during the period 20–13 Ma. After 13 Ma, the peripheral area gradually changed to a bathyal environment, but the study area itself remained as a small continental rise, the result of differential uplift and corresponding intrusions of granitoids. Granitic intrusive activity occurred intermittently in the area. In the eastern margins of the area, diorites and dioritic porphyries were intruded during the period 24–19 Ma, and in the western margins of the area similar rocks were emplaced at 7.2–6.0 Ma. Numerous quartz–porphyry or dacite dikes were also emplaced at 11–8 Ma around the granitic complex (). The details of the distribution of granitic rocks, with K–Ar dating results from According to the strict classification of felsic plutonic rocks, most of the Cretaceous, Paleogene, and Neogene granitoids in NE Japan are granodiorites in terms of their compositions, and according to microscopic and XRD analysis. The details of our methods of XRF analysis follow . Many dark enclaves of quartz syenite to adamerite composition are found in the marginal parts of individual granodioritic intrusions, and these are thought to have been derived from the parental magma of the granodiorite. The Paleogene and Neogene granodioritic rocks can be divided into three different types on the basis of petrography. The first is a holocrystalline granodiorite characterized by large hornblende crystals, and it is mainly distributed in the upstream area of Koaizawa (KIZ) and the downstream area of Ohmizuhata (OMH) catchments, as shown in . The second is a granodiorite porphyry, which is distributed along the northern and southern margins of the holocrystalline granodiorite. The boundaries between the holocrystalline and porphyritic rocks are transitional. The third type is quartz porphyry, and it can be found in the marginal part of the Ohmizuhata granodiorite porphyry as dikes that were extruded from the granodiorite porphyry, and some of them appear to be pegmatitic dikes because of the presence of graphic intergrowths and perthite.Almost all the modal compositions of these rocks plot in the granodiorite field in the IUGS classification (after a. The rocks are made up of quartz, plagioclase, hornblende, and some K-feldspar, in descending order of volumetric importance. Biotite was found in only one specimen of the Ohmizuhata granodiorite porphyry. Hornblende is commonly altered to chlorite, and some of the plagioclase is affected by sericitic alteration. The three different rock types of holocrystalline granodiorite, granodiorite porphyry, and quartz porphyry are identified based on variations in the proportions of their minerals and their textures.A plot of norms on the granite system of suggest that the KIZ granodiorite-adamerite was emplaced at somewhat deeper levels than the OMH granodiorite porphyry. The granodioritic rocks (KIZ granodiorite-adamerite in b) lie on the 100–200 MPa cotectic line, whereas the granodiorite porphyries (OMH (granodiorite porphyry)) lie along the 50–100 MPa cotectic line. 
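For orientation, the cotectic pressures quoted above can be converted to emplacement depths through the lithostatic gradient; the rough estimate below assumes an average crustal density of 2.6 g/cm³, the value used later in this paper:

\[ z \approx \frac{P}{\rho_{rock}\,g} = \frac{200\ \mathrm{MPa}}{2600\ \mathrm{kg/m^{3}} \times 9.8\ \mathrm{m/s^{2}}} \approx 7.8\ \mathrm{km} \]

At that depth a purely hydrostatic column of water (ρ ≈ 1000 kg/m³) would exert only about 77 MPa, which is the scale of the pressure drop invoked later when fluids cross from the lithostatic to the hydrostatic regime.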
The quartz porphyries (OMH (quartz porphyry) in b) also plot in a similar area, between the 100 and 200 MPa cotectic lines.The SiO2 versus FeO(t)/MgO diagram, after , shows that all the granitic rocks, including their mafic enclaves, are calk-alkaline (). The fact that the quartz porphyry plots within the tholeiite field may simply be the result of enrichment in iron sulfides due to mineralization.), where the mafic enclaves and the host granitic rocks are connected by tie lines, indicate that all granitoid members form a linear trend due to some genetic relationship. The decreases in major oxides, except Na2O and K2O, with increasing SiO2, would be the result of fractional crystallization. However, the relationships between the enclaves and host granites for Na2O and K2O show a retrograde tendency against increasing SiO2, which could be explained by Na2O easily escaping from the host rocks during plagioclase alteration, and K2O being an incompatible element in shallow-level granitic rocks.Four distinctive types of mineralization are recognized in and around the granodioritic complex (), as follows. (A) Weak copper mineralization accompanied by glassy veins (mentioned later) is found in the holocrystalline granodiorite. Tiny chalcopyrite grains occur along the grain boundaries of minerals in the holocrystalline rocks, and along the margins of glassy veins. The bulk copper assays are up to 100 ppm, slightly higher than background levels. (B) Small but very high-grade Cu–Pb–Zn quartz veins occur where the granodiorite porphyry is in contact with the holocrystalline granodiorite. More than 10 quartz veins, some up to 10 cm wide, contain 3–5 wt% Cu and 9–27 wt% Zn. It is noteworthy that several characteristics of these veins, such as orientation, density, and formation temperature, are common to both the quartz and glassy veins mentioned above. (C) Cu–Mo mineralization in the OMH (Ohmizuhata) granodiorite porphyry is economically important. Disseminated zones of mineralization with 0.3–0.6 wt% Mo and 0.1–0.2 wt% Cu were intersected sporadically over the 800-m length of a previous drill hole (). Chalcopyrite and molybdenite are concentrated in the fracture zones of the granodiorite porphyry. (D) Gold mineralization is also important in the quartz porphyry of the Ohmizuhata area, and assays show 0.8–2.7 g/ton Au in zones ranging from 1 to 5.4 m in width in the strongly silicified parts of the quartz porphyry. Evidence of hydrothermal activity associated with this mineralization can be observed, and it is considered to represent a fossil of geothermal activity in and around the Paleogene to Neogene granite–porphyry system.The granodiorites, granodiorite porphyries, and quartz porphyries all contain several types of veins. shows representatives of these veins, and among these we were able to recognize three types: quartz veins, hydrothermal breccia veins, and glassy veins, as described below.The quartz veins are generally planar, and they are filled mostly with quartz (a). Their widths range from 5 to 20 mm, and the veins cut the foliation of the granitoids. The simple tabular shapes of the quartz veins indicate typical brittle behavior which was involved in their formation. The quartz veins can be observed in all the granitoid types.Hydrothermal breccia veins are mainly found in the porphyritic rocks as discordant bodies with widths of 50–100 mm. Brecciated material includes fragments of the host rock, and the fragments are angular (b). 
Fragment sizes vary widely, ranging from several centimeters to a few millimeters. The observations indicate that brecciation (in other words, brittle failure) of the host rock occurred during hydrothermal activity. The fact that the corners of the fragments retain their angularity indicates that the associated hydrothermal solutions had only a weak chemical reactivity, because evidence of dissolution was not observed. Additionally, angularity of fragments suggests minimal transport, which means ‘in situ’ brecciation. Apart from the angular fragments, the hydrothermal breccia veins are filled mostly with quartz. classified several types of hydrothermal breccias and veins as tectonic breccias, fault breccias, hydraulic breccias, hydraulic implosion breccias, phreatic breccias, and hybrid breccias. The mechanism of formation of the hydrothermal brecciation in our study area is not clear, but we consider it to have taken place under boiling conditions, because precipitation of quartz occurred simultaneously with very weak chemical reactions during extremely short periods of activity, such as a phreatic episode or earthquake.Dark gray to black glassy veins are found mainly in the granodiorites. Some tiny examples range in width from 1 to 10 mm, but most are 50–100 mm in width, with a preferred orientation. Some of the main glassy veins also appear to have been injected into the host rock, and these field observations suggest that they could be viewed as pseudotachylites (); nevertheless, their mechanism of formation remains unclear. We continue, therefore, simply to describe this type of vein as a “glassy vein”. Glassy veins cut across mafic and fine-grained enclaves, as shown in c, and the glassy veins are cut by quartz veins. The timing of the formation of the hydrothermal breccia veins remains unclear, except that these veins cut the granodiorite. Altogether, these observations indicate the following order of formation and depths for the veins. The glassy veins were the first to be formed at relatively deep levels; the hydrothermal brecciation veins were then formed at moderate depths; and the quartz veins were the last to form during hydrothermal activity.We prepared doubly polished, 100 μm thick plates of the glassy and quartz veins that cut the granodiorites, granodiorite porphyries, and quartz porphyries. All of fluid inclusions show two phase and primary properties. The sizes of the inclusions are always less than 10 μm so that the salinity of the fluids could not be measured. Homogenization temperatures, Th, were measured using a Linkam heating stage. shows histograms of Th for two-phase fluid inclusions in various kinds of veinlets. The median Th of the glassy veins was 343 °C, and the Th in quartz veins in granodiorites with Cu–Pb–Zn polymetallic mineralization was 330 °C. In contrast, Th values in the porphyritic rocks were relatively low, so that the Th in quartz veins in granodiorite porphyries with disseminated Cu–Mo mineralization was 246 °C (median value), and the Th in quartz veins in quartz porphyries with gold mineralization was 245 °C (median value). Higher values of Th were obtained in glassy and quartz veins that cut granodiorites (A & B in ) and lower values were found in quartz veins that cut porphyritic rocks (C & D in ). Those bimodal populations (higher values in A and B, lower values in C and D) indicate that two different processes occurred. 
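The practical consequence of the temperature (and pressure) contrast recorded by these two inclusion populations is a sharp fall in dissolved silica. As a rough, purely indicative illustration, the classical Fournier quartz-geothermometer correlation, which is strictly valid only up to about 250 °C along the liquid–vapour saturation curve and ignores the even stronger pressure dependence near the critical point, can be inverted to give solubility as a function of temperature:

    def quartz_solubility_mg_per_kg(temp_c):
        # Approximate quartz solubility in pure water along the saturation curve,
        # from the inverted Fournier quartz geothermometer (indicative only, ~20-250 degC)
        return 10.0 ** (5.19 - 1309.0 / (temp_c + 273.15))

    for t in (150.0, 250.0, 330.0):
        print(t, round(quartz_solubility_mg_per_kg(t)))

Even this simplified relation shows the solubility falling by several hundred mg/kg as a fluid cools from the glassy-vein toward the quartz-vein temperature range, consistent with the massive quartz deposition described above.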
Higher temperature fluids were captured within the glassy veins and lower temperature fluids were trapped in the quartz veins. Homogenization temperatures should be converted to adequate pressure conditions. It was not possible to calculate pressure corrections using the salinity of the fluid inclusions in this case, but the highest median value (343 °C) for the glassy veins is considered to represent >500 °C under pressures of 200 MPa (∼7–8 km in depth, which is explained later) (The chemical compositions of the minerals in the granitoids and the veins were analyzed using an electron probe micro analyzer (EPMA; JEOL JXA-8200) in the Graduate School of Environmental Studies, Tohoku University, Japan. For plagioclase and amphibole, the accelerating voltage, beam current, and beam diameter were set at 15 kV, 12 nA, and 1–5 μm, respectively. For analyzing Ti concentrations in quartz, an accelerating voltage of 20 kV, a beam current of 100 nA, and a beam diameter of 5 μm were used (cf. ). Peak titanium was measured at 300 s and background Ti at 150 s, using a PETH crystal, whereas Al and Si were measured at 10 s peak and 5 s background using TAP crystals. Rutile, K-feldspar, and wollastonite were used as standard materials for Ti, Al, and Si, respectively (The temperatures were estimated by applying the geothermometer of for hornblende–plagioclase pairs in the host rocks and veins, or by using plagioclase inclusions in hornblende. The pressures were determined using the Al-in-hornblende geobarometer (), with the pressure-dependent calibration from The temperatures of quartz growth were estimated by using the Ti-in-quartz geothermometer (). The pressure-dependent calibration of is suitable for the range of pressures relevant to our study.The granodiorites are made up of quartz, plagioclase, hornblende, and small amounts of K-feldspar. Black glassy veins are composed of hornblende, quartz, plagioclase, and K-feldspar, together with magnetite, rutile, and titanite as accessories. We applied the hornblende–plagioclase geothermometer to pairs of hornblende and plagioclase in the host rocks. Unfortunately, the grain sizes of those pairs in the glassy veins were too fine for determining the chemical compositions. lists the chemical compositions of the hornblende–plagioclase pairs in the granodiorites, and shows the calculation of equilibrium temperatures and pressures based on the hornblende–plagioclase geothermometer (The possible ranges of temperature and pressure of the host granodiorites lie within 650–700 °C and 3–3.5 kbar (∼300 MPa), respectively. Based on the assemblages of normative minerals described above, the granodiorites (KIZ granodiorite–adamerite on b) plot on the 100–200 MPa cotectic line. The hornblende–plagioclase pair indicated relatively higher pressure condition due to early crystallization in granitic magma. Pressure estimates can be related to emplacement depths for the granitoids (), so that the depth of emplacement in our case might be ∼200 MPa (7–8 km depth at ρ = 2.6). shows the titanium contents of the quartz and estimated temperatures. a shows a glassy vein in granodiorite. It is difficult to identify direct evidence for shearing inside granodiorite, however, brittle deformation (shearing) should have occurred under differential pressure conditions (see b shows an SEM image of quartz (relatively dark) and K-feldspar (relatively bright) between tiny black veins. The irregularly shaped quartz is surrounded by K-feldspar with a mosaic texture. 
Dark spots indicate the analytical points for Ti in the quartz; however, this particular quartz contains no Ti. c shows an SEM image of a tiny black vein, and the relatively dark grains are quartz. The estimated temperatures for this quartz, as deduced from its Ti content, range from 646 to 787 °C. Taking into account the equilibrium temperature of the hornblende–plagioclase pairs in the host rock (∼700 °C), the formation temperature of the glassy vein is considered to be 650–700 °C. In contrast, the Ti content of the quartz in the quartz–K-feldspar shown in b was zero, and we can put forward two reasons for this. One is that the fluid precipitating the quartz was depleted in titanium, and the other is that the temperature was less than 600 °C, which is out of the range of the Ti-in-quartz geothermometer (). According to the observed textures (irregularly shaped quartz surrounded by a mosaic of K-feldspar), the quartz–K-feldspar aggregates in the glassy veins might have been precipitated from solutions at temperatures less than 600 °C.d shows an SEM image and analytical points for Ti in quartz in a host rock. Some points showed more than 100 ppm Ti, and no Ti could be observed in the adjacent zone. e is an SEM-CL image that shows quartz with brittle failure. Comparing d (SEM image) and e (SEM-CL image), the high-Ti points are in the bright CL zone, and the zero-Ti points are weakly luminescent zones of interstitial quartz filling microfractures. Oscillatory zoning in the CL image has been interpreted as reflecting quartz dissolution and precipitation due to oscillations in pressure ( reported heterogeneous SEM-CL images of epithermal quartz that reflected complex hydrothermal events. The higher contents of titanium indicated igneous temperatures and primary quartz, and the absence of titanium in the interstitial quartz indicated a hydrothermal event associated with brittle failure under relatively low temperatures (<550 °C) in shows the distribution of Si, Ti, Al, Fe, Mg, Ca, Na and K in the granodiorite, including the glassy vein shown in a. Higher X-ray intensities for Si, Al, Fe, Mg, and Ca were recognized in the glassy vein. In particular, the Ca intensity was higher than either the Na or K intensity. These observations indicate that the glassy vein contains a Ca amphibole and epidote.The timing of formation and the relationships between the quartz–K-feldspar zone in the glassy vein and the interstitial quartz in the host grain (quartz) are not totally clear. However, they were not formed at temperatures lower than the quartz veins. Taking into account the analysis of fluid inclusions, the formation temperature of the quartz–K-feldspar zone in the glassy vein is considered to be 500–550 °C, and the interstitial quartz in the host granodiorite might have formed at the same temperatures (In the classical geological sense, a “Deep-Seated Geothermal Reservoir” (DSGR) cannot exist above the plastic temperature of the reservoir rock (>400 °C, depending on the rock type) because connecting fractures are absent in plastically deformable rocks, and the convective transport of fluids under ductile conditions is therefore expected to be weak. The permeability of the Earth’s crust largely governs important processes such as the advective transport of heat and fluid (), and the permeability of the Earth’s crust is extremely heterogeneous, ranging from 10−23
m2 for intact crystalline rocks to 10−7 m2 for well-sorted gravels (). () described permeability changes that reflected the temperatures and pressures of rocks on and inside the magmatic fluid plume of a porphyry copper system. Crustal-scale permeability shows dynamic behavior, and () noted that volcano–plutonic complexes have the potential to satisfy the criteria necessary for the development of an artificial DSGR.

Our study of fluid inclusion microthermometry and the petrological analysis of several kinds of veinlets in a granite–porphyry system have provided us with the following scenario for the development of a supercritical geothermal reservoir. Magmatic fluids moved through a hot granitoid intrusive body, which heated the host rock mass via conduction. Magmatic fluids associated with volatile material ascended under a lithostatic pressure regime. The granodiorite was emplaced at around 7–8 km depth, and the glassy veins formed around the top of the granodiorite (∼5 km depth). The glassy veins contained a great deal of water at a temperature of 500–550 °C, and formed hornblende rather than biotite. The pressure regime was still lithostatic. However, when a magmatic fluid (or a supercritical geofluid) crosses the transition zone between the magmatic and meteoric fluid regions, the pressure drops from lithostatic to hydrostatic conditions, and the temperature drops from 550 °C to less than 350 °C within a very narrow range of depths. This is an episodic event like an earthquake. () described a thermal profile for the Butte porphyry copper deposit (Montana, USA) that mimics an irregular pattern following the fractures active at any given time and evolves by discrete cycles of dynamic, transitory, high-temperature hydrofracturing, fluid release, and vein formation that overprints cooler host-rock temperatures. They found that a magmatic–hydrothermal continuum is represented in hydrothermal veins ranging from ∼710 °C to <440 °C. Our study indicates that the upper limit of the formation temperature of the glassy veins is 650–700 °C, and that supercritical fluids were subsequently trapped at around 500–550 °C. Supercritical fluid activity within almost the same temperature range was therefore recorded in both cases.

No economically viable porphyry copper deposits have been found in Japan. Here, in the Kowaizawa–Ohmizuhara area, there is evidence of Cu–Pb–Zn mineralization, including gold mineralization, but the area is still very much in the exploration stage in terms of finding an economic deposit. However, centers of natural porphyry copper associated with mesothermal and epithermal deposits either evolved from a supercritical geothermal system to a conventional, subcritical geothermal system as they cooled, or they maintained a conventional geothermal system above the heat center throughout their lives at shallow levels where temperatures were ≤350 °C.

The critical point of pure water is at 374 °C and 22 MPa (); it shifts to higher temperatures and pressures in the case of brine (), and to lower temperatures and higher pressures in an H2O–CO2 system (). () noted that the bottom of the WD-1a well in the Kakkonda geothermal field was in a supercritical state. Temperature and depth (pressure) conditions in a supercritical geothermal reservoir strongly depend on the geothermal gradient. In the case of a gradient of 10 °C/100 m, the possible depth for a supercritical condition is ∼4000 m, and it is <3000 m in the case of a gradient of 15 °C/100 m.
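The dependence of the supercritical depth on the geothermal gradient quoted above can be checked with a few lines of arithmetic. The sketch below assumes a linear gradient, a surface temperature of 15 °C, and a target temperature of ~400 °C (slightly above the critical point of pure water); these assumptions are illustrative and not taken from the study.

```python
# Minimal sketch: depth at which a target (near-supercritical) temperature
# is reached for a linear geothermal gradient.
SURFACE_T = 15.0   # assumed surface temperature (deg C)
TARGET_T = 400.0   # assumed target temperature (deg C), above water's critical point

def depth_to_target(gradient_c_per_100m, surface_t=SURFACE_T, target_t=TARGET_T):
    """Depth (m) at which target_t is reached for a linear gradient."""
    return (target_t - surface_t) / gradient_c_per_100m * 100.0

for grad in (10.0, 15.0):
    print(f"{grad} degC/100 m -> ~{depth_to_target(grad):.0f} m")
# 10 degC/100 m -> ~3850 m (consistent with the ~4000 m quoted above)
# 15 degC/100 m -> ~2567 m (consistent with <3000 m)
```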
Important EGS technological problems that need to be solved in the development of a supercritical geothermal reservoir are as follows. How can one drill to reach and penetrate the supercritical region in the subsurface? How can one create fractures under “beyond brittle” conditions? And how can one control the induced seismicity in a supercritical reservoir? Technological advances in these areas are essential even for EGS in a conventional geothermal reservoir. The most important issue is the creation of fractures in “beyond brittle” rock masses under supercritical conditions. The stockwork of veins in granite–porphyry systems provides some hints for the creation of fracture clouds in supercritical geothermal reservoirs.

The systematic distribution of veins in a granite–porphyry system was investigated in order to understand the geological properties of a supercritical geothermal reservoir. Coupled with petrological and mineralogical investigations (SEM, EPMA, SEM-CL and fluid inclusions), we demonstrate the evolution of natural hydrothermal fracturing to form several kinds of veinlets (quartz veins, hydrothermal breccia veins and glassy veins) in the rock mass under super- and sub-critical conditions. The glassy veins are interpreted to have formed in the supercritical fluid reservoir at 500–550 °C under lithostatic pressures, after which the pressures dropped drastically. The solubility of silica also dropped, and the quartz veins formed under hydrostatic pressures. Connections between the lithostatic and hydrostatic pressure regimes were key to the formation of the hydrothermal breccia veins.

A supercritical geothermal reservoir has great advantages compared with conventional geothermal systems, including high-entropy fluids and weak chemical reactivity (). Granite–porphyry systems can provide important lessons regarding the nature and development processes of supercritical geothermal activity, and they represent possible candidates for natural analogs of supercritical geothermal reservoirs. Vein stockworks and their evolution illustrate the integrated history of fracture networks and fluid connectivity in these systems.

Multiaxial fatigue design of cast parts: Influence of complex defect on cast AS7G06-T6

AS7G06-T6 cast aluminum alloy is tested under tension, torsion and tension–torsion fatigue loading for two load ratios. Basquin’s law and the step loading method are used to obtain the fatigue limit under multiaxial loading. The Crossland criterion and a principal stress criterion considering the Goodman idea are compared to evaluate the multiaxial behavior. The influence of complex defects on the fatigue limit is analyzed under multiaxial loadings. Several artificial defects are machined on fatigue specimens with different distances between their edges. A new definition of the equivalent defect size considering the distance between defect edges is proposed.
For both tension and tension–torsion fatigue, the competition between a single natural defect and complex artificial defects is observed and analyzed.

Nomenclature
area percentage of grains within a certain grain size zone
maximum value of the first invariant of the stress tensor (MPa)
amplitude of the second invariant of the deviatoric stress tensor over a loading cycle (MPa2)
load ratio between minimum and maximum stresses of the loading cycle
yield strength at 0.2% plastic deformation (MPa)
material parameter in Crossland criterion
parameter in principal stress criterion considering Goodman idea
parameter in principal stress criterion considering Goodman idea (MPa)
variation of stress amplitude between two steps in the “step loading” procedure (MPa)
components in the principal coordinate system (MPa)
components in the principal coordinate system corresponding to the stress amplitude tensor (MPa)
fatigue limit corresponding to 5 × 106 cycles (or 2 × 106 cycles for tension–torsion loadings with artificial defects) (MPa)
fatigue limit in tension with the load ratio R = −1 (MPa)
previous stress amplitude at which the specimen passed 5 × 106 cycles (MPa)
fatigue limit in tension with the load ratio R = 0.1 (MPa)
mean stress in tension corresponding to the fatigue limit in tension with the load ratio R = 0.1 (MPa)
components in the principal coordinate system corresponding to the mean stress tensor (MPa)
principal stress calculated with the mean stress tensor (MPa)
principal stress calculated with the stress amplitude tensor (MPa)
principal stress considering Goodman idea (MPa)
fatigue limit in torsion with the load ratio R = −1 (MPa)
fatigue limit in torsion with the load ratio R = 0.1 (MPa)
mean stress in torsion corresponding to the fatigue limit in torsion with the load ratio R = 0.1 (MPa)
average grain size of grains within a certain grain size zone (μm)
characteristic length describing the defect size, projection of the defect surface on a plane perpendicular to the direction of the maximum principal stress (μm)

The fatigue behavior under multiaxial loadings is an important issue in the fatigue design of engineering components such as aircraft and automobiles. However, the cyclic stress–strain responses under multiaxial loadings are very hard to analyze due to the complex loading path, which leads to complexity in the description of the fatigue behavior. Many criteria suited to different loading conditions and materials have been proposed until now. They can be divided into three categories: criteria based on stress, strain and energy. A complete review of multiaxial fatigue criteria can be found in Ref. .

Many approaches have been proposed in order to assess the influence of a defect on the fatigue life. An overview of that problem can be found in Ref. . The main characteristics of a defect are:
Defect type (inclusion, pore, shrinkage, oxide…).
Defect morphology (spherical, elliptical, complex…).
Defect position (internal, sub-surface or surface).
Defect size (function, or not, of loading direction).

The fatigue design of a metallic cast part is strongly linked to the casting process. The designer needs to compromise between the fatigue resistance of the component and the allowable defect size due to the process. In order to perform this optimization, a criterion that takes the influence of defects on the fatigue limit into account is necessary (Murakami ).

The following points are addressed in this paper. First, fatigue tension, torsion and tension–torsion tests were performed on specimens with and without defects, and the fatigue limits under different loading conditions were identified. Second, the identification and error analysis were carried out for two multiaxial criteria: the Crossland criterion and the principal stress criterion considering the Goodman idea. Third, the influence of natural (shrinkage cavity), single and complex (3 defects under tension and 9 defects under tension–torsion) artificial defects on the fatigue limit was analyzed and compared to that of defect-free specimens. A criterion was proposed to calculate the equivalent defect size for complex defects under fatigue tension. The competition between a single natural defect and complex artificial defects under uniaxial and multiaxial fatigue loadings was observed and explained using the Kitagawa diagram.

The material is a cast aluminum alloy, AS7G06-T6. Its chemical composition is given in . The material was supplied as-cast as a bar of 270 mm length and 30 mm diameter. Two cylindrical specimens were machined from the bar. All specimens (tension, torsion and tension–torsion) have the same gage section: a useful length of 20 mm and a diameter of 10 mm. As for the casting procedure, the cast aluminum alloy AS7G06-T6 was obtained by gravity die casting. In each casting, 95 kg of 100% new ingots were cast, corresponding to 40 specimens. An electric furnace was used. The casting temperature was 720 ± 10 °C and the mold temperature was 350 °C when closing the die. Opening and shake-out lasted 120 s after filling. Hydrogen degassing and oxide removal were done by argon injection into the aluminum bath (5–7 l/min during 7 min, before and after composition corrections) with a 0.1% deoxidation flux (COVERAL GR 2410) and a 0.05% degassing flux. Besides the composition, the temperature, porosity rate and density were also controlled during casting.
The thermal treatment to the T6 condition consists of: solution treatment at 540 °C for 10 h, quenching in cold water, a 24 h hold at room temperature, and then ageing at 160 °C for 8 h.

Electron Backscatter Diffraction (EBSD) measurements were performed on the specimen surface. EBSD scans were performed in beam control mode with a spatial resolution of 5 μm/step. The zone scanned in the SEM (6.0 × 5.0 mm2) contains 1449 grains. As shown in a, the grain size (the diameter Φ of a disk having the same area as the grain) of the material varies from 28 to 1305 μm. The average grain size is 259 μm, with a standard deviation of 215 μm. The distribution of grain area percentage for different grain sizes is also shown. The total area of the grains within each grain size zone is cumulated and divided by the total grain area in the observation zone. It can be seen that the grains between 500 and 600 μm occupy 17.4% of the total area, and the grains in the zone Φ
∊ [300, 800 μm] occupy about 3/4 of the total grain area. The average grain size considering the grain size percentage can be calculated below:where Φi is the average grain size of grains within a certain grain size zone and fi is the area percentage of grains within a certain grain size zone. The average grain size considering grain area percentage is 573 μm. The grain size may play an important role in fatigue mechanisms. This is maybe because the grain boundaries act as natural barriers of crack propagation and thus required an additional energy to propagate the crack in the next grain The observation by optical microscope reveals the microstructure of the material at a smaller scale. A dendritic structure is observed (solid solution primary α and eutectic Al–Si surrounding), as seen in b. Shrinkage cavities could also be seen in b. The measurement the Secondary Dendrite Arm Spacing (SDAS) was done by dividing the distance of several secondary dendrite arms by the number of arms. Only the dendrites with at least 6 arms have been used in the measurement in order to reduce the measurement error. 155 dendrites were measured. The SDAS varies between 26 and 57 μm, following a normal distribution. 3/4 of the SDASes are located in the zone [31, 43 μm]. The average SDAS is 38 μm, with a standard deviation of 6 μm.The specimens were observed by Non Nondestructive Testing (NDT): X-ray (XR) and Die Penetrant Liquid. The numerical X-ray detection has been done using the specification NF EN 12681 and the equipment Tube Yxlon Y.TU/320-D03. The detecting thickness was 30 mm. The voltage was 90 kV. The focus-to-film distance was 1 m; the geometric unsharpness was 0.15 mm; and the angle of incidence was 90°. The specimen was exposed for 30 s in the intensity 5. The visible Image Quality Indicator (IQI) was W12 (0.25 m) following the specification NF EN 462-1. The Die Penetrant Liquid detection was done following specifications NF EN 571-1 and NF EN 1371-1. Visible lighting was used, with residual under UV of 6 lux and under ultraviolet of 14 W/m2. A fluorescent penetrant of sensibility S2 was adopted. The impregnation time was 20 min and the temperature was 22 °C. The penetrant was later eliminated with water and air, under a pression below 2 bars. After a drying procedure under 45 °C for 3 min, a dry revelator was applied. After 10 min, the specimen was ready for examination. The recording time was controlled to be under 30 min.7 Specimens have been used to analyze the size distribution and the porosity ratio of shrinkage cavities. The plug-in Fiji “Labeling 3D” was adopted to identify the size of each cavity in the stack, whereas the plug-in “fracz” which ran through each slice of a stack (there are 2,400 slices per stack) was used to analyze the volume fraction of shrinkage cavities. 1,372 shrinkage cavities were detected. Over 88% of the analyzed pores have a volume between 20,000 and 60,000 μm3. As for the porosity ratio, 16,800 slices in 7 stacks were analyzed. The pore ratio in terms of volume fraction is 0.00181%.Sample tested are classified as grade 1 according to ASTM E155 In order to get the fatigue limits of AS7G06-T6 under multiaxial loadings, tension, torsion and tension–torsion fatigue tests have been performed. Tension fatigue tests were performed by the means of Amsler vibrophore (electromagnetic resonance machine) under force control. The test frequence was 108 Hz. The drop of test frequence (5 Hz) was adopted as the stop condition of fatigue tests. 
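The S–N data from these tests are later reduced with Basquin's law to estimate fatigue limits at 5 × 106 cycles. A minimal sketch of such a fit is given below; the data points, the function name and the resulting numbers are invented placeholders, not results from this study.

```python
# Minimal sketch: fit Basquin's law, sigma_a = A * N**b, to S-N data and
# read off the stress amplitude at 5e6 cycles. The data are placeholders.
import numpy as np

cycles = np.array([8e4, 3e5, 1e6, 4e6])           # assumed cycles to failure
stress_amp = np.array([110.0, 95.0, 82.0, 70.0])  # assumed amplitudes (MPa)

# Basquin's law is linear in log-log space: log(sigma_a) = log(A) + b*log(N)
b, logA = np.polyfit(np.log(cycles), np.log(stress_amp), 1)
A = np.exp(logA)

N_target = 5e6
sigma_d = A * N_target**b
print(f"A = {A:.1f} MPa, b = {b:.3f}, fatigue limit at 5e6 cycles ~ {sigma_d:.1f} MPa")
```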
When a test is stopped under this condition, the sample is almost broken and contains a macroscopic crack deeper than half the diameter of the sample. Fatigue tests were conducted at two load ratios: R = −1 and 0.1. Tension and tension–torsion tests with the load ratio R = 0.1 were conducted on a servo-hydraulic fatigue testing machine (Instron 1343). This machine enables both static and fatigue tests with a dynamic capacity of up to ±250 kN. The tension–torsion tests were carried out with the same stress amplitudes in tension and torsion. Torsion tests with the load ratio R = −1 were performed on an MTS model 809 servo-hydraulic machine with an axial/torsional test system, whose maximum force capacity was 100 kN. The test frequency was 10 Hz. The tests were conducted under maximum-stress control. For all tension, torsion and tension–torsion tests, the loadings were sinusoidal. Fatigue limits are given using the amplitude defined as the maximum stress minus the mean stress over the load cycle.

In order to be able to produce Kitagawa-type diagrams, the fatigue limit was estimated from the "step loading" procedure as σD = σD0 + Δσ/2, where σD0 is the previous stress amplitude at which the specimen passed 5 × 106 (or 2 × 106) cycles, and Δσ is the variation of stress amplitude between two steps (Δσ = 10 MPa here). For the load ratio R
= −1 (0.1), 15 (10) experimental points are used in the identification in tension tests, and 10 (4) points are used in torsion tests. Results obtained in are supposed to be “defect free” in the sense of an industrial classification: class 1 ASTM E155 Al alloy, shrinkage cavity, 1/4 inch plus Die Penetrant surface examination. They will be used in the following as base for comparing multiaxial criteria and also as the reference “defect free” material on the Kitagawa diagram to study the influence of the defect. These results are in agreement with others on identical or very similar materials shows the S–N curves for the alloy AS7G06-T6 under tension and torsion for different load ratios R
= −1 and 0.1. By using the Basquin’s law, the fatigue limits under the four loadings above for 5 × 106 cycles can be identified and listed in shows the determination procedure of the fatigue limit in tension–torsion for a load ratio R
= 0.1. Among the four tested specimens marked “defect free”, the specimen with no defect failed at 40 MPa. The other three specimens failed from shrinkage cavity. One of them failed at 50 MPa from a shrinkage cavity of 420 μm, the second and the third one failed at 40 MPa from a shrinkage cavity of 285 and 187 μm separately. It will be presented later that for these defect sizes, a natural or artificial defect has no influence on the fatigue limit for AS7G06-T6, so the experimental results from the four specimens with small shrinkage cavities or without defect can be used in the identification of the fatigue limit in tension–torsion. For the specimen with no defect and failed directly at the first loading step, the fatigue limit can be identified using the Basquin’s law. For the other three specimens, the fatigue limit is the mean stress amplitude of the last two loading steps. The mean value of the fatigue limit for “defect free” specimens is 37.5 MPa.The fatigue limits of 5 different load types and two load ratios identified above are listed in . For load ratio R
= 0.1, the mean stresses corresponding to the fatigue limits are also given.

The calculation of J2,a is obtained by a double maximization over the loading period T:
\sqrt{J_{2,a}}=\frac{1}{2\sqrt{2}}\max_{t_i\in T}\max_{t_j\in T}\sqrt{\big(\underline{\underline{S}}(t_i)-\underline{\underline{S}}(t_j)\big):\big(\underline{\underline{S}}(t_i)-\underline{\underline{S}}(t_j)\big)}
The parameter α can be identified in different ways. For example, it can be identified using the fatigue limits in both tension and torsion with the same load ratio R = −1 or 0.1, to focus on the behavior under multiaxial loadings. In this study, in order to consider the influence of the load ratio, α was identified using the fatigue limits under tension at the two load ratios R = −1 and 0.1: σD−1,a and σD0.1,a.
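For in-phase sinusoidal loading, the usual Crossland form √J2,a + α·I1,max reduces to simple closed expressions (√J2,a = σa/√3 for uniaxial tension, √J2,a = τa for pure torsion, and I1,max = σa + σm from the axial part only). The sketch below evaluates the equivalent stress with the α and σcr values reported just below; the load amplitudes in the example are placeholders, not test results from this study, and the exact equation numbering of the paper is not reproduced.

```python
# Minimal sketch: Crossland equivalent stress for proportional sinusoidal
# tension-torsion loading. alpha and sigma_cr are the values reported in the
# text; the load amplitudes below are placeholders.
from math import sqrt

ALPHA = 0.801      # Crossland material parameter (identified in the text)
SIGMA_CR = 76.8    # MPa, Crossland threshold (identified in the text)

def crossland_equivalent(sigma_a, sigma_m, tau_a):
    """sqrt(J2,a) + alpha * I1,max for in-phase sinusoidal tension-torsion."""
    sqrt_j2a = sqrt(sigma_a**2 / 3.0 + tau_a**2)  # deviatoric amplitude term
    i1_max = sigma_a + sigma_m                    # hydrostatic part (axial only)
    return sqrt_j2a + ALPHA * i1_max

# Placeholder load case: 40 MPa amplitude in tension and torsion, R = 0.1
sigma_a, tau_a = 40.0, 40.0
sigma_m = sigma_a * (1 + 0.1) / (1 - 0.1)   # mean stress from the load ratio
print(f"sigma_eq = {crossland_equivalent(sigma_a, sigma_m, tau_a):.1f} MPa "
      f"vs sigma_cr = {SIGMA_CR} MPa")
```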
This identification gives α = 0.801 and σcr = 76.8 MPa. The simplified form of the Crossland criterion and the identification of the parameter α can be found in Ref. . shows the σ11,a–σ12,a curves simulated with the Crossland criterion together with the fatigue limits under tension, torsion and tension–torsion for the different load ratios. As the identification of the parameter α was done using σD−1,a and σD0.1,a, the simulated curve passes through these two experimental points. In torsion, the simulated σcr was slightly underestimated for the load ratio R
= −1 and slightly overestimated for the load ratio R
= 0.1. It is slightly larger than the experimental value under tension–torsion for the load ratio R
= 0.1, too. More details will be given in the following error analysis. shows the error analysis for the Crossland criterion for 5 different load types presented in where σeq,exp is the Crossland equivalent stress calculated with Eq. for each load type with the fatigue limits identified experimentally. And σeq,cal is the Crossland equivalent stress calculated using load type 1 or 2, which equals to 76.8 MPa. As the parameter α in the Crossland criterion was identified with σD−1,a and σD0.1,a, the errors for specimens 1 and 2 were zero. It can be seen that the errors for torsion (R
= 0.1) and tension–torsion (R
= 0.1) are reasonable: −11.8% and −17.1% respectively. The average value of the absolute errors for 5 load types |Error‾| is 6.6%. This shows that Crossland criterion is relatively accurate to describe the multiaxial behavior of cast aluminum. However, as can be seen in this study, when the parameter in Crossland criterion is identified by the experimental results in tension, the criterion will give higher error in torsion. Moreover, the error for the load ratio R
= 0.1 is higher than that for R
= −1 in torsion, too. In order to simulate the behavior of AS7G06-T6 under multiaxial loadings and to take better account of the influence of the mean stress, a new criterion based on the principal stress criterion will be compared to the Crossland criterion.

The principal stresses are the components of the stress tensor when the basis is changed in such a way that the tensor is diagonal. The principal stress criterion with respect to the method of Goodman is based on the principal stress criterion. However, the new criterion also considers the influence of the mean stress, as inspired by the Goodman idea. As presented in Eq. , a stress tensor can be divided into two parts, one part of stress amplitude and the second part of mean stress. These two parts are then used to calculate the corresponding principal stresses. They are considered to have a linear relation, and the experimental results in can be used to identify the two parameters α1 and β1 in Eq. . The principal stress criterion with respect to the method of Goodman is then defined as being in a linear relation with the principal stress calculated using the mean stress, as seen in Eq. :
\underline{\underline{\sigma}}=\begin{pmatrix}\sigma_{11}&\sigma_{12}&\sigma_{13}\\ \sigma_{12}&\sigma_{22}&\sigma_{23}\\ \sigma_{13}&\sigma_{23}&\sigma_{33}\end{pmatrix}=\begin{pmatrix}\sigma_{a,11}&\sigma_{a,12}&\sigma_{a,13}\\ \sigma_{a,12}&\sigma_{a,22}&\sigma_{a,23}\\ \sigma_{a,13}&\sigma_{a,23}&\sigma_{a,33}\end{pmatrix}+\begin{pmatrix}\sigma_{m,11}&\sigma_{m,12}&\sigma_{m,13}\\ \sigma_{m,12}&\sigma_{m,22}&\sigma_{m,23}\\ \sigma_{m,13}&\sigma_{m,23}&\sigma_{m,33}\end{pmatrix}
\begin{pmatrix}\sigma_{a,11}&\sigma_{a,12}&\sigma_{a,13}\\ \sigma_{a,12}&\sigma_{a,22}&\sigma_{a,23}\\ \sigma_{a,13}&\sigma_{a,23}&\sigma_{a,33}\end{pmatrix}\rightarrow\begin{pmatrix}\sigma_{a,I}&0&0\\ 0&\sigma_{a,II}&0\\ 0&0&\sigma_{a,III}\end{pmatrix}\rightarrow\sigma_{pr}=\max\{\sigma_{a,I},\sigma_{a,II},\sigma_{a,III}\}
\begin{pmatrix}\sigma_{m,11}&\sigma_{m,12}&\sigma_{m,13}\\ \sigma_{m,12}&\sigma_{m,22}&\sigma_{m,23}\\ \sigma_{m,13}&\sigma_{m,23}&\sigma_{m,33}\end{pmatrix}\rightarrow\begin{pmatrix}\sigma_{m,I}&0&0\\ 0&\sigma_{m,II}&0\\ 0&0&\sigma_{m,III}\end{pmatrix}\rightarrow\sigma_{m,pr}=\max\{\sigma_{m,I},\sigma_{m,II},\sigma_{m,III}\}
\sigma_{pr}=\alpha_1\cdot\sigma_{m,pr}+\beta_1\rightarrow\text{identification of }\alpha_1\text{ and }\beta_1
\sigma_{pr,gdm}=\alpha_1\cdot\sigma_{m,pr}+\beta_1
 shows the identification of the parameters α1 and β1 using two experimental sources. The solid line was identified using the 5 fatigue limits listed in , while the dotted line was identified using only σD−1,a and Rm. The latter, simpler identification method was analyzed in order to see whether it can be used to replace the first one. It can be seen from that the two methods give similar parameters α1 and β1. As tension–torsion tests were done only with the load ratio R = 0.1, there are no experimental results for σpr,gdm < 50 MPa.
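A compact way to read the decomposition above: diagonalize the amplitude and mean parts of the stress tensor separately, take the largest eigenvalue of each, and compare the amplitude principal stress with the linear Goodman-type line α1·σm,pr + β1. The sketch below does exactly that with NumPy; the α1 and β1 values and the example tensors are placeholders, since the identified parameters are only reported graphically in the original.

```python
# Minimal sketch: principal stress criterion with a Goodman-type mean-stress
# correction. alpha1/beta1 and the example tensors are placeholder values.
import numpy as np

ALPHA1, BETA1 = -0.3, 80.0   # assumed parameters of the linear relation (MPa)

def principal_max(tensor):
    """Largest principal stress (largest eigenvalue) of a symmetric tensor."""
    return np.linalg.eigvalsh(tensor).max()

def goodman_check(sigma_amplitude, sigma_mean):
    """Return sigma_pr and the allowable amplitude alpha1*sigma_m,pr + beta1."""
    sigma_pr = principal_max(sigma_amplitude)
    sigma_m_pr = principal_max(sigma_mean)
    allowable = ALPHA1 * sigma_m_pr + BETA1   # sigma_pr,gdm
    return sigma_pr, allowable

# Placeholder tension-torsion cycle: 40 MPa axial / 40 MPa shear amplitude,
# with a 49 MPa tensile mean stress.
sa = np.array([[40.0, 40.0, 0.0], [40.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
sm = np.array([[49.0, 49.0, 0.0], [49.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
sigma_pr, allowable = goodman_check(sa, sm)
print(f"sigma_pr = {sigma_pr:.1f} MPa, allowable = {allowable:.1f} MPa")
```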
The error analysis for the 5 load types using the new principal stress criterion is shown in . The new criterion gives a large error for load type 4 for both ways of parameter identification. The error was calculated using Eq. , where σeq,exp is the equivalent stress σpr calculated with Eq. for each load type with the experimentally identified fatigue limits, and σeq,cal equals σpr,gdm calculated with each σm,pr in Eq. . With the parameters identified considering all 5 load types, both positive and negative errors can be seen. The average absolute error for the 5 load types, |Error1‾|, is 9.3%. If the parameters are identified using only σD−1,a and Rm, the errors are either zero or negative. For load types 1, 2, 3 and 5, the absolute values of the errors are below 20%. The average absolute error for the 5 load types, |Error2‾|, is 15.2%, which is much larger than |Error1‾| and |Error‾|. It seems that the new principal stress criterion considering the Goodman idea is less accurate for this material.

In this section, the Crossland criterion and the principal stress criterion considering the Goodman idea were tested for AS7G06-T6 under 5 load types. Although the parameter in the Crossland criterion was identified using two fatigue limits in tension, this criterion gives reasonable results in tension, torsion and tension–torsion for the two load ratios R
= −1 and 0.1. The principal stress criterion without consideration of Goodman idea was also tested. However, this criterion seemed overestimated the influence of mean stress on fatigue under multiaxial loadings and gave an average absolute error over 20% for the 5 load types. So it is less accurate than Crossland’s one for AS7G06-T6. The principal stress criterion considering Goodman idea corrected the influence of mean stress effectively. However, as it also gave much larger errors compared to Crossland criterion, it is not ideal for AS7G06-T6, either. Crossland criterion is thus adopted for the material under multiaxial loadings.In order to estimate the influence of complex defects on fatigue limit under tension, 3 artificial defects were produced using the spark erosion machining. A copper wire carrying a current generates a high intensity electric arc that melts the material locally and machines desired default. To obtain a spherical defect, the defect depth is equal to the diameter of the wire. In order to study the influence of the ligament, the defects were made on the surface of specimens with different distances between defect edges. a–c shows the photos from binocular microscope. With the same defect diameter and depth (≈400 μm), the distance between defect edges dedge varies between 650 μm and 0. For dedge= 0, three defects have become one big defect. The positions and shapes of defects can also be seen in In this study, the parameter area proposed by Murakami In order to identify the crack initiation sites, the fracture surface has been observed after failure, as seen in d–f. In this study, most cracks initiated from the three artificial defects. However, there was the competition between the artificial defects and natural defects (shrinkage cavities). This will be presented at the end of this study for discussion. The experimental results in this study are also compared to those in Refs. shows the values of fatigue limit as a function of the distance between defect edges under tensile loading using the “step loading” method, see Eq. . It can be seen that for a defect size of 400 μm , if the distance between the edges of two defects dedge> 400 μm, there was no influence on the limit fatigue, the three defects can be regarded as isolated defects; if dedge< 200 μm, the fatigue limit was reduced. This result shows that the ligament does have an influence on the fatigue limit, as long as dedge is small enough (dedge< defect size). So a method should be proposed to calculate the equivalent defect size.For specimens with 3 surface defects, when dedge is large enough, there will be no interaction between defects and they can be considered as isolated ones. On the contrary, when dedge is smaller than a certain value (≈area of one defect), the ligament between defects will be very thin and fragile so that the defect should be considered bigger and its size should be calculated using the analytical method in Ref. If dedge<area of one defect, the 3 defects are considered as a big defect and the ligament can be considered in the calculation of the size of the big defect.If dedge⩾area of one defect, the 3 defects are considered as 3 isolated defects. shows the Kitagawa diagram for 3 defects in comparison with the experimental results of single defects in Ref. the fatigue limits for two big shrinkage cavities can be estimated. 
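The defect-interaction rule described above can be sketched in a few lines: if the edge distance is smaller than the size of a single defect, merge the defects (and the ligaments between them) into one equivalent defect; otherwise treat them as isolated. The merging formula used below (a projected rectangle spanning the defect row) and the function name are illustrative assumptions, not the analytical method cited in the text.

```python
# Minimal sketch of the defect-interaction rule: merge a row of defects into
# one equivalent defect when the ligaments are narrow. Illustrative only.
def equivalent_defect_size(size_um, n_defects, d_edge_um):
    """Equivalent size (um) of a row of identical defects."""
    if d_edge_um >= size_um:
        return size_um                      # isolated defects: size unchanged
    # merged defect: total projected width x single-defect depth (assumption)
    width = n_defects * size_um + (n_defects - 1) * d_edge_um
    depth = size_um
    return (width * depth) ** 0.5

print(equivalent_defect_size(400.0, 3, 650.0))  # ~400 um (treated as isolated)
print(equivalent_defect_size(400.0, 3, 120.0))  # ~759 um (merged estimate)
```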
Their defect sizes are 508 and 755 μm and they both failed at σmax between 89 and 111 MPa, which are in good agreement with the results of the artificial defects of the same sizes. So it can be concluded that using the parameter area to characterize the defect size, a shrinkage cavity has similar influence on fatigue limit as a spark erosion machining defect of the same size.In order to analyze the influence of complex defects on the fatigue behavior of AS7G06-T6 under biaxial loading, the specimens with 9 defects have been tested under tension–torsion loadings. The 9 defects have been made by spark erosion machining method with an identical defect size (area) of 400 μm. However, the distances between defect edges varied between 840 and 200 μm, as seen in shows the Kitagawa diagram for tension–torsion tests. Tests were conducted with specimens with or without defects.All tests were done under the same conditions. The stress amplitudes were the same both in tension and in torsion, with the load ratio R
= 0.1. The “step loading” method was used to obtain the fatigue limit. The fatigue limit is the mean value of the stress amplitude at which the specimen failed in less than 2 million cycles and that of the stress amplitude of the last stress level at which the specimens underwent 2 million cycles.4 Stress levels were applied in total. The experimental results have been slightly modified in the figure for the reason of presentation. For the specimens with one defect of 600 or 900 μm and 9 defects of 400 μm, the same results can be obtained. The fatigue limit was about 35 MPa. The specimens with a defect of 400 μm failed under the stress amplitude of 50 MPa, so σD= 45 MPa. This value is higher than that of the specimens with a defect of 600 or 900 μm and 400 μm 9 defects. According to the results above, we can see that there was no difference in the influence on the fatigue limit for a defect between 600 and 900 μm, but the fatigue limit was reduced compared to the result for a specimen with a defect of 400 μm. However, in order to confirm this conclusion, more specimens with defects of 400 μm should be tested. The fatigue limit for specimens with 9 defects of 400 μm is smaller compared to that of the specimens with a defect of 400 μm as well, but there was no difference among the three specimens with a different edge distance between defects. However, among the three specimens with 9 defects, the specimen with the smallest edge distance failed from a small surface shrinkage cavity of 187 μm. For the four tested specimens marked “Defect free or shrinkage cavity” in , three of them failed from a shrinkage cavity. One of them failed at 50 MPa from a shrinkage cavity of 430 μm. Two specimens failed at 40 MPa from a shrinkage cavity of 187 and 283 μm respectively. The last one which failed at 40 MPa had no defect. It can be concluded that σD is not influenced by defects smaller than 900 μm in tension–torsion tests (σa,11
= σa,12), with a load ratio R = 0.1. Unlike the case of the tension tests, the influence of natural or artificial defects on the fatigue limit of AS7G06-T6 is not obvious. Similar results can be found in the literature for A356-T6 .

For the two specimens with 9 artificial defects, the fracture surface passes through the three defects of the first row (see ). For the third specimen, which failed from a shrinkage cavity, the secondary crack also passes through the defects of the first row. shows the fracture surfaces with the initiation sites of a shrinkage cavity and of a defect-free specimen.

Under uniaxial and multiaxial loadings, there is a competition between single and complex defects. As shown in , 3 artificial defects of size area ≈ 400 μm are placed in a row with dedge = 120 μm. As dedge < defect size, the cumulative size of the complex defect can be calculated using the new criterion proposed in this study. The cumulative area equals 800 μm. However, the crack which led to the failure of the specimen initiated from a shrinkage cavity, whose area was 706 μm. This phenomenon can be explained by the Kitagawa diagram. As can be seen from , the defects between 500 and 900 μm have a similar influence on the fatigue limit. Furthermore, it has been concluded that a natural defect has the same influence on the fatigue limit as an artificial defect of the same size. So if several defects coexist in a specimen, there will be competition among them under fatigue loading. Moreover, although it did not lead to the final failure of the specimen, a crack that initiated from the 3 defects was found after the test, which also supports the viewpoint of defect competition in fatigue.

 shows the competition between a shrinkage cavity and artificial defects under tension–torsion. The specimen failed from the shrinkage cavity instead of the artificial defects. The shrinkage cavity had a size in terms of area of 187 μm, which is smaller than that of a single artificial defect. As in the case of tension, this result can also be explained by the Kitagawa diagram. As shown in , σD is not influenced by defects smaller than 900 μm in tension–torsion tests. So a specimen can fail from any defect smaller than 900 μm under tension–torsion. As in the case of tension, a crack not leading to failure can be found at the artificial defects, as shown in .
Tension, torsion and tension–torsion fatigue tests with load ratios R = −1 and 0.1 have been performed on the AS7G06-T6 alloy. The "step loading" procedure was used to evaluate the fatigue limit. Both defect-free specimens and specimens with natural or artificial defects (sizes between 187 and 860 μm, single or complex defects) have been tested. Basquin's law is used to model the S–N curves. The Crossland criterion and the principal stress criterion considering the Goodman idea have been tested for multiaxial loadings. A new definition of the equivalent defect size for complex defects under fatigue tension is proposed.

The Crossland criterion gives reasonable results in tension, torsion and tension–torsion for the two load ratios R = −1 and 0.1.
The average value of the absolute errors for the 5 load types is 7%. The principal stress criterion considering the Goodman idea gives less accurate results: the average absolute error for the 5 load types is 15%.

A criterion is proposed in this study to calculate the equivalent defect size for 3 defects:
If dedge < area of one defect, the 3 defects are considered as one big defect, and the ligament is included in the calculation of the size of the big defect.
If dedge ⩾ area of one defect, the 3 defects are considered as 3 isolated defects.

Using the parameter area to characterize the defect size, a shrinkage cavity has a similar influence on the fatigue limit as a spark erosion machining defect of the same size.

The fatigue limit is not influenced by defects smaller than 900 μm in tension–torsion tests (σa,11 = σa,12) with a load ratio R = 0.1 for AS7G06-T6.

There is a competition between single and complex defects under both uniaxial and biaxial loadings. In some cases, a specimen can fail from a small natural shrinkage cavity instead of from big complex artificial defects. Although cracks also initiated from the complex defects, they did not lead to the failure of the specimen.

More tension–torsion fatigue tests need to be performed to further evaluate the Crossland criterion. The new criterion to calculate the equivalent size of 3 defects in tension should be confirmed using the finite element method. The competition between single and complex defects should be studied at the microscopic scale.

Mechanical alloying of carbon nanotube and Al6061 powder for metal matrix composites

► A model was developed to predict CNT length and distribution during ball milling.
► The influence of welding and fracturing on CNTs during ball milling was identified.
► Significant CNT breakage occurred during the initial phase of mechanical alloying.
► CNTs embedded inside the metal particles were protected from milling media impacts.

Mechanical alloying has been widely utilized to break down clustered CNTs for incorporation in metal matrix composites. However, the breakage of CNTs during the ball milling process degrades their effectiveness. Due to the challenges in collecting the CNTs for measurement, quantitative study of CNT breakage has been difficult. In this study, CNTs were mechanically alloyed with Al6061 powder using high-energy milling equipment. The CNTs from the surface of the mechanically alloyed particles were collected and measured. Due to the difficulty in obtaining the CNTs embedded inside the particles, a mathematical model has been developed to predict the overall CNT length distribution in the composite. Significant CNT breakage occurred during the initial phase of the mechanical alloying due to the crushing of the clusters. The model predicted that no further change occurred in the overall CNT length for times greater than 1 h of mechanical alloying because most of the CNTs had already become embedded within the particles and were thus protected from further milling media impacts.
A faster dispersion of CNTs and a lower particle fracturing rate may help preserve the original CNTs.

Nomenclature
weight fraction of the agglomerated CNTs
average nominal area of a single particle (m2)
total nominal area of all particles (m2)
net area embedded during mechanical alloying (m2)
exposed surface area of all the particles (m2)
weight fraction of the total embedded CNTs
weight fraction of the embedded CNTs that are dispersed
weight fraction of the total surface CNTs
weight fraction of the surface CNTs that are dispersed
normal distribution function of the CNT length
cumulative distribution function of the CNT length
thickness of powder coating the milling balls (m)
fraction of effective impact during ball milling
average number of impacts required for a particle to be struck once
number of particles fractured in a short time period, Δt
total number of particles during mechanical alloying
number of particles welded in a short time period, Δt
weight of embedded CNTs that are dispersed (g)
weight of surface CNTs that are dispersed (g)
the mean of the normal distribution of CNT length (μm)
average time required to effectively impact all the particles (s)
the variance of the normal distribution of CNT length
overall CNT length distribution at time t
surface CNT length distribution at time t
surface dispersed CNT length distribution at time t
embedded CNT length distribution at time t

The extraordinary properties of carbon nanotubes (CNTs) make them a potentially promising reinforcement material for polymers and metals . In this study, Al6061 powder and 1.0 wt.% CNTs were mechanically alloyed to analyze the CNT breakage. The CNTs on the particle surface were collected, and the CNT length and distribution were measured for different mechanical alloying durations. Because of the difficulty in collecting the CNTs embedded inside the particles, a mathematical model was developed to predict the overall CNT length and distribution during the mechanical alloying process.

Al6061 particles (Valimet Inc.) and multi-walled CNTs from NanoLab® were used. The Al6061 particle and CNT information are summarized in . Two 12.7 mm diameter zirconia balls and a mixture of 2.97 g of Al6061 powder and 0.03 g of CNTs were placed in a 60 ml zirconia jar for mechanical alloying. The powders were mechanically alloyed in a high-energy SPEX 8000M mixer at a rotational speed of 1200 rpm for 3, 10, 20, 30, 40 and 60 min. After the powders were mechanically alloyed, 40 ml of alcohol (99.9%) was introduced, and the powder–alcohol mixture was ultrasonicated for 1 h. The CNTs detached from the Al6061 particle surfaces and floated in the alcohol solution. Several drops of the CNT–alcohol solution were separated and diluted until the CNTs dispersed under ultrasonic vibration. One or two droplets of the diluted alcohol–CNT solution were placed on an aluminum foil and dried naturally. The CNTs dispersed uniformly on the aluminum foil, and the morphology of the CNTs was analyzed. For the synthesis of the Al6061–CNT composite, a semi-solid powder processing technique was employed. The composite was formed at 640 °C, at which temperature the liquid fraction of Al6061 was about 30% .

A mathematical model was developed to predict the overall CNT length during the alloying process. The overall CNT length was obtained by considering the effects of particle welding and fracturing on the CNT length evolution process.
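The measured CNT lengths are characterized in the model by a normal distribution with a mean and a variance (see the nomenclature above). A minimal sketch of how such a length distribution can be fitted from measured lengths is shown below; the sample values are invented placeholders, not measured data from this study.

```python
# Minimal sketch: fit a normal distribution to measured CNT lengths and
# evaluate its density. The sample lengths below are placeholders.
import numpy as np
from scipy.stats import norm

lengths_um = np.array([1.8, 2.4, 1.1, 0.9, 1.6, 2.0, 1.3, 1.7])  # assumed data

mu, sigma = norm.fit(lengths_um)   # maximum-likelihood mean and std. deviation
print(f"mean = {mu:.2f} um, std = {sigma:.2f} um")

# Probability density of the fitted CNT length distribution at 1.5 um
print(f"pdf(1.5 um) = {norm.pdf(1.5, mu, sigma):.3f} per um")
```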
With the collected experimental data, including the surface CNT length and length distribution, the particle surface area, and the embedded area, the overall CNT length can be predicted. A schematic of the ball milling process for CNT and metallic powder is illustrated in . Five assumptions have been postulated, as follows:

(i) Only the dispersed CNTs can be embedded during the mechanical alloying process. Agglomerated CNTs will prevent the bonding between the metal particles .
(ii) The length distribution of the dispersed CNTs on the particle surface (Φs,d) is assumed to be similar to that of all the CNTs (dispersed and agglomerated) on the particle surface (Φs). It is difficult to experimentally measure the length distribution of the dispersed CNTs separately from that of all the CNTs on the particle surface. Theoretically, the CNTs on the surface, dispersed or agglomerated, experience the same amount of energy input from mechanical alloying in a given amount of time. Thus, the length reductions are assumed to be similar for all CNTs on the surface.
(iii) The exposed CNTs and embedded CNTs are uniformly distributed on the surface area and embedded area, respectively.
(iv) The particles are uniformly deformed during the mechanical alloying process.
(v) During a single welding or fracturing event, only two particles are involved—a single particle is fractured into two, and two particles are welded into a single particle.

The details will be discussed in the subsequent sections.

The fraction of agglomerated CNTs over all the CNTs in the composite is defined as a = Wa/Wt, where Wa and Wt are the weights of the agglomerated CNTs and of all CNTs in the composite, respectively. Then, the dispersion, d, is defined as d = Wd/Wt, where Wd is the weight of dispersed CNTs. In other words, as long as the CNTs do not form an agglomeration, they are regarded as dispersed CNTs. d varies between 0 and 1, where d = 0 means all the CNTs are agglomerated, and d = 1 means all the CNTs have been dispersed.

Ws is the weight of CNTs on the particle surface; thus, the fraction of surface CNTs to the total CNT weight is fs(t) = Ws/Wt. Similarly, the fraction of embedded CNTs is defined as fe(t), while the dispersed fraction of embedded CNTs is defined as fe,d(t). Therefore, the total CNT fractions sum to one, i.e. fs(t) + fe(t) = 1. According to Assumption (i), at any time t, all the embedded CNTs are dispersed, i.e. fe(t) = fe,d(t). We can also define the fraction of dispersed CNTs on the particle surface as fs,d(t), following the prior definition of CNT dispersion, d.

During the mechanical alloying process, fracturing and welding continuously take place. Welding of particles will increase the embedded area, while fracturing will expose some of the embedded area. The surface area, As, is defined as the exposed area of the particles. The surface area that was embedded during mechanical alloying is defined as the embedded area, Ae (see ). Considering a short time period, Δt, the change of embedded surface area due to welding is ΔAw(t), and the exposed surface area due to fracturing is ΔAf(t). Then, the net change of embedded area can be calculated as ΔAe(t) = ΔAw(t) − ΔAf(t). A positive ΔAe(t) means that particles are becoming larger due to overall welding, and a negative value suggests that the fracturing of particles is dominating.

A flow chart summarizing the overall CNT length calculation procedure is shown in . With Assumption (iii), the change of embedded CNT weight in a short time period Δt can be obtained by
\Delta W_e(t)=W_{s,d}(t)\frac{\Delta A_w(t)}{A_s(t)}-W_e(t)\frac{\Delta A_f(t)}{A_e(t)}
where Ws,d(t) is the weight of dispersed CNTs on the particle surfaces. The first term on the right-hand side of Eq. is the contribution of the dispersed CNTs from the surface, and the second term is the loss of CNTs due to fracturing. Consequently, the change in the fraction of embedded CNTs that are dispersed can be presented in the following form:
\Delta f_e(t)=\frac{\Delta W_e(t)}{W_t}=\frac{\Delta A_w(t)}{A_s(t)}f_{s,d}(t)-\frac{\Delta A_f(t)}{A_e(t)}f_e(t)

Statistically, given two probability density functions p1(x) and p2(x) and weights w1 and w2 such that w1 > 0, w2 > 0 and w1 + w2 = 1, the mixture distribution of p1(x) and p2(x), f(x), can be calculated as f(x) = w1p1(x) + w2p2(x) .
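The update of the embedded length distribution described in the next paragraph is exactly such a two-component mixture: newly embedded surface CNTs and surviving previously embedded CNTs are combined with weights proportional to their amounts. A minimal numerical sketch is given below; the fractions, area ratios and Gaussian length distributions are placeholders, not values from this study.

```python
# Minimal sketch: one mixture-distribution update step for the embedded CNT
# length distribution. All numerical values are placeholders.
import numpy as np
from scipy.stats import norm

lengths = np.linspace(0.0, 5.0, 501)            # length axis (um)
phi_s = norm.pdf(lengths, loc=1.2, scale=0.4)   # assumed surface CNT length pdf
phi_e = norm.pdf(lengths, loc=1.8, scale=0.5)   # assumed embedded CNT length pdf

f_s_d, f_e = 0.30, 0.50               # assumed dispersed-surface / embedded fractions
dAw_over_As, dAf_over_Ae = 0.05, 0.01 # assumed welded / fractured area ratios in dt

w_new = dAw_over_As * f_s_d           # newly embedded CNTs (from the surface)
w_old = f_e - dAf_over_Ae * f_e       # previously embedded CNTs that remain
total = w_new + w_old

phi_e_next = (w_new / total) * phi_s + (w_old / total) * phi_e    # mixture update
print("area under updated pdf ~", np.trapz(phi_e_next, lengths))  # ~1.0
```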
Therefore, the embedded CNT length distribution at t + Δt can be represented by the mixture distribution of the CNTs newly welded from the surface and the leftover CNTs originally embedded inside the particles. The weights were calculated from the amount of newly welded CNTs from the surface (i.e. ΔAw(t)fs,d(t)/As(t)) and the leftover CNTs originally embedded inside the particles (i.e. fe(t) − ΔAf(t)fe(t)/Ae(t)). Thus, from Eq. ,
\phi_e(t+\Delta t)=\frac{(\Delta A_w(t)/A_s(t))\,f_{s,d}(t)}{f_e(t)+(\Delta A_w(t)/A_s(t))\,f_{s,d}(t)-(\Delta A_f(t)/A_e(t))\,f_e(t)}\,\phi_s(t)+\frac{f_e(t)-(\Delta A_f(t)/A_e(t))\,f_e(t)}{f_e(t)+(\Delta A_w(t)/A_s(t))\,f_{s,d}(t)-(\Delta A_f(t)/A_e(t))\,f_e(t)}\,\phi_e(t)

A factor k is introduced to simplify Eq. ; the rate of fe(t) can then be derived as follows:
\frac{df_e(t)}{dt}=(f_{e,d}(t)-d)\,\frac{1}{1-k}\,\frac{d\alpha(t)/dt}{\alpha(t)}+f_e(t)\,\frac{k}{1-k}\,\frac{d\alpha(t)/dt}{1-\alpha(t)}
\frac{d(\phi_e(t)f_e(t))}{dt}=(f_{e,d}(t)-d)\,\frac{1}{1-k}\,\frac{d\alpha(t)/dt}{\alpha(t)}\,\phi_s(l,t)+\phi_e(t)f_e(t)\,\frac{k}{1-k}\,\frac{d\alpha(t)/dt}{1-\alpha(t)}
The overall CNT length distribution can be calculated with the following equation:
\phi(l,t)=\phi_s(l,t)f_s(t)+\phi_e(l,t)f_e(t)
where ϕ(l, t), ϕs(l, t) and ϕe(l, t) are the overall CNT length distribution, the surface CNT length distribution, and the embedded CNT length distribution at time t, respectively.

The particles continue to deform during the mechanical alloying and, therefore, the change of embedded area, ΔAe(t), in a short time, Δt, is not equal to the surface area change, As(t + Δt) − As(t). The simplified welding and fracturing mechanisms are shown in (a)–(c); the particle evolution process is manually divided into two steps, deforming and welding. In (a), a surface area, ΔAe(a), will be embedded at time t. Because of the deformation from ball milling, this surface area changes to ΔAe(b) in (b), which is embedded during the welding process at time t + Δt (c). In ,
\frac{\Delta A_e(b)}{A_e(b)+A_s(b)}=\frac{A_s(b)-A_s(c)}{A_e(c)+A_s(c)}
From (a) and (b), the following relationship can be obtained with Assumption (iv), where k is the ratio of the welded area to the fractured area during mechanical alloying. A flow chart describing the procedure to determine k is shown in . First, a rule relating the welding/fracturing of particles to the change of surface area is established, as shown in Eq. . During a short time, Δt, assume that nw particles are welded and nf particles are fractured, so that nΔt = nw + nf is the total number of particles involved in fracturing and welding. According to Assumption (v), and as indicated in , the change of embedded surface area due to welding is nw × Ab(t) (Eq. ), and the exposed surface area due to fracturing is 2nf × Ab(t) (Eq. ), where Ab(t) is the average nominal area of a single particle.
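The welding/fracturing bookkeeping above can be turned into a tiny update routine: each welding event hides one average particle face, each fracturing event exposes two, and the embedded CNT fraction then follows the Δfe(t) balance given earlier. The sketch below implements one Δt step; all numbers, and the function name, are placeholder assumptions.

```python
# Minimal sketch: one time-step of the embedded-area and embedded-CNT-fraction
# bookkeeping described above. All numerical values are placeholders.
def step(A_s, A_e, A_b, n_w, n_f, f_s_d, f_e):
    """One dt of embedded-area and embedded-CNT-fraction bookkeeping.

    Note: the surface area A_s also evolves through particle deformation
    (see the text); its update is not modeled in this sketch.
    """
    dA_w = n_w * A_b        # area hidden by welding events
    dA_f = 2 * n_f * A_b    # area exposed by fracturing events
    dA_e = dA_w - dA_f      # net change of embedded area

    # fraction balance: gain from dispersed surface CNTs, loss from fracturing
    df_e = (dA_w / A_s) * f_s_d - (dA_f / A_e) * f_e
    return A_e + dA_e, f_e + df_e

A_e_new, f_e_new = step(A_s=2.0e-3, A_e=5.0e-4, A_b=1.0e-9,
                        n_w=30000, n_f=10000, f_s_d=0.3, f_e=0.5)
print(A_e_new, f_e_new)
```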
The number of particles involved in fracturing and welding in a short time period Δt is calculated as shown in Eq. . The average time interval between effective impacts involving all the particles can be defined as τ (Eq. ), where Rb and ρb are the radius and density of the milling ball, respectively; ρp and Hv are the density and hardness of the powder, respectively; CR is the charge ratio (mass of balls/mass of powder); hc is the powder thickness coating the milling balls; v is the ball impact velocity; Γ = Ib·le is the frequency of effective impacts that can fracture or weld the particles; Ib is the ball impact frequency of the system; and le is the effective impact fraction.
where Ab,t(t) =